All Posts

Modified Query ============
| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1", avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "Avg_Load_Response1", sum(builtin:apps.web.actionCount.xhr.browser:parents) As "XHR_Count1", avg(builtin:apps.web.visuallyComplete.xhr.browser:parents) As "Avg_Xhr_Response1" where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name="xxxxx" earliest=-31d@d latest=@d-1m by entity.application.name
| eval hour = tonumber(strftime(_time,"%H"))
| eval dow = tonumber(strftime(_time,"%w"))
| where hour>=6 AND hour<=18 AND dow!=0 AND dow!=6
| eval Avg_Load_Response1=round((Avg_Load_Response1/1000),2), Avg_Xhr_Response1=round((Avg_Xhr_Response1/1000),2), Load_Count1=round(Load_Count1,0), XHR_Count1=round(XHR_Count1,0)
| table entity.application.name, Avg_Load_Response1
Original Query ============
| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1", avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "Avg_Load_Response1", sum(builtin:apps.web.actionCount.xhr.browser:parents) As "XHR_Count1", avg(builtin:apps.web.visuallyComplete.xhr.browser:parents) As "Avg_Xhr_Response1" where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name="xxxxxx" earliest=-31d@d latest=@d-1m by entity.application.name
| eval Avg_Load_Response1=round((Avg_Load_Response1/1000),2), Avg_Xhr_Response1=round((Avg_Xhr_Response1/1000),2), Load_Count1=round(Load_Count1,0), XHR_Count1=round(XHR_Count1,0)
| table entity.application.name, Avg_Load_Response
Never mind. I posted too soon. I replaced "| addcoltotals label=Total " with "| addcoltotals labelfield="Vision ID" label="Total"" and it worked. Thanks.
Use the labelfield option. It tells Splunk into which field (column) to put the total.
...
| table bank_fiid, "Transactions", "Good", "%Good" "Fair", "%Fair", "Unacceptable", "%Unacceptable", "Average", "Report Date"
| rename bank_fiid as "Vision ID"
| addcoltotals label=Total labelfield=bank_fiid
...
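If it helps to see the behavior in isolation, here is a minimal, self-contained sketch you can paste into any search bar (the field values are made up for illustration, not taken from your data):
| makeresults count=3
| streamstats count As row
| eval bank_fiid="FIID_".row, Transactions=row*10
| table bank_fiid, Transactions
| addcoltotals label=Total labelfield=bank_fiid
The sum of Transactions lands in the totals row, and the word Total appears in the bank_fiid column because that is the field named by labelfield.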
I am trying to exclude non-business hours and weekends in my mstats query.

Original Query:
| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1", avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "Avg_Load_Response1", sum(builtin:apps.web.actionCount.xhr.browser:parents) As "XHR_Count1", avg(builtin:apps.web.visuallyComplete.xhr.browser:parents) As "Avg_Xhr_Response1" where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name="xxxxxx" earliest=-31d@d latest=@d-1m by entity.application.name
| eval Avg_Load_Response1=round((Avg_Load_Response1/1000),2), Avg_Xhr_Response1=round((Avg_Xhr_Response1/1000),2), Load_Count1=round(Load_Count1,0), XHR_Count1=round(XHR_Count1,0)
| table entity.application.name, Avg_Load_Response

I modified the query as below and am not getting any results:
| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1", avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "Avg_Load_Response1", sum(builtin:apps.web.actionCount.xhr.browser:parents) As "XHR_Count1", avg(builtin:apps.web.visuallyComplete.xhr.browser:parents) As "Avg_Xhr_Response1" where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name="xxxxx" earliest=-31d@d latest=@d-1m by entity.application.name
| eval hour = tonumber(strftime(_time,"%H"))
| eval dow = tonumber(strftime(_time,"%w"))
| where hour>=6 AND hour<=18 AND dow!=0 AND dow!=6
| eval Avg_Load_Response1=round((Avg_Load_Response1/1000),2), Avg_Xhr_Response1=round((Avg_Xhr_Response1/1000),2), Load_Count1=round(Load_Count1,0), XHR_Count1=round(XHR_Count1,0)
| table entity.application.name, Avg_Load_Response1

Can anyone please help me achieve this? Thanks in advance.
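Not an answer from the thread, but a minimal sketch of one common pattern for this kind of filter, assuming the same metric names and index as above: without a span, mstats returns a single aggregated row per by-group, so strftime(_time, ...) has no per-hour buckets to filter on. Splitting into hourly buckets first, filtering, then re-aggregating with stats is one way around that (averaging the hourly averages is only an approximation unless it is weighted by the hourly counts):
| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "hourly_load_count", avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "hourly_load_response" where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name="xxxxx" earliest=-31d@d latest=@d-1m span=1h by entity.application.name
| eval hour=tonumber(strftime(_time,"%H")), dow=tonumber(strftime(_time,"%w"))
| where hour>=6 AND hour<=18 AND dow!=0 AND dow!=6
| stats sum(hourly_load_count) As "Load_Count1", avg(hourly_load_response) As "Avg_Load_Response1" by entity.application.name
| eval Avg_Load_Response1=round(Avg_Load_Response1/1000,2), Load_Count1=round(Load_Count1,0)
| table entity.application.name, Load_Count1, Avg_Load_Response1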
Issue is resolved. I just had to tell Splunk that the time it receives is in UTC, and it worked. In props.conf, I just added this: TZ=UTC. @richgalloway and @PickleRick, appreciate the effort.
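For anyone who finds this later, the stanza ends up looking like the sketch below; the sourcetype name is a placeholder, not something from this thread, so use the stanza (sourcetype, source, or host) that matches the affected input:
# props.conf on the forwarder/indexer that parses this data
[my_sourcetype]
TZ = UTC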
Thank you, I will investigate this as well to see what works best.
Thank you, I will investigate this.
Here is my current query. I either get the Totals label in the last column or not at all. I need it to show in the first column at the beginning of the Totals row. Any help is greatly appreciated. Thanks.

index=etims_na sourcetype=etims_prod platformId=5 bank_fiid=CHUA
| eval response_time=round(if(strftime(_time,"%Z") == "EDT",((j_timestamp-entry_timestamp)-14400000000)/1000000,((j_timestamp-entry_timestamp)-14400000000)/1000000-3600),3)
| stats count AS Transactions count(eval(response_time <= 1)) AS "Good" count(eval(response_time <= 2)) AS "Fair" count(eval(response_time > 2)) AS "Unacceptable" avg(response_time) AS "Average" BY bank_fiid
| eval "%Good"=(Good/Transactions)*100
| eval "%Fair"=(Fair/Transactions)*100
| eval "%Unacceptable"=(Unacceptable/Transactions)*100
| addinfo
| eval "Report Date"=strftime(info_min_time, "%m/%Y")
| table bank_fiid, "Transactions", "Good", "%Good" "Fair", "%Fair", "Unacceptable", "%Unacceptable", "Average", "Report Date"
| rename bank_fiid as "Vision ID"
| addcoltotals label=Total
| append [| makeresults | eval "Vision ID"="Threshold" | eval "Good" = "response <= 1s" | eval "Fair" = "1s < response <= 3s" | eval "Unacceptable" = "3s < response"]
| fields - _time
@ASierra - We have run into the security group count issue. Currently, we map security groups via the 'SAML Groups' page, where each security group maps to a specific role. @jpondrom_splunk offers a solution that would require adding the app roles from Azure into the 'SAML Groups', then modifying the 'Role Alias' in the 'SAML Configuration' to a different value. This would retool all of the SAML authentication from 'groups' to 'roles'. Clarifying question(s) on step 1: Did you mean 'Add the Azure App Roles to the SAML Groups'? Or is it the case that, having mapped Azure Security Groups to Splunk Roles via the 'SAML Groups' interface, I don't need to do anything else? It would be wonderful if a simple Azure change could provide the fix. I'm not averse to adding the Azure App Roles to the SAML Groups mapping, but the method you offer is much easier.
Of course. With a single-disk installation, a single disk failure also takes down the whole machine. But seriously - it's just a matter of risk and cost management. Some users can accept the risk of the whole machine going down, knowing that the machine is cheaper (and possibly a bit faster). But I agree, storage is relatively cheap nowadays. One important caveat: I'm of course talking about components which are replicated (indexers, search heads). You probably don't want a RAID0-based machine as the CM.
Hi @Anamika.David, Did you find a solution or any new information you can share about your question?
Hi @Alex_Rus, if the disk is always mounted with the same name, you can put it in your inputs.conf:
[monitor://E:\my_folder\my_files.log]
Ciao. Giuseppe
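A slightly fuller version of that stanza, in case it is useful; the index and sourcetype values here are placeholders rather than anything from the thread:
[monitor://E:\my_folder\my_files.log]
disabled = 0
index = my_index
sourcetype = my_sourcetype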
I try to avoid RAID0 because loss of a single disk takes down the entire array.  I won't be bitten by that again.
IIRC, the _introspection index has disk space usage data for searches.  I still question the utility of that information, however, since the usage is not cumulative.
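For anyone who wants to poke at it anyway, a cautious starting point is to list what the disk-objects introspection actually records on your version before relying on specific field names (this is a sketch; the component breakdown and data.* fields vary by version):
index=_introspection sourcetype=splunk_disk_objects earliest=-60m
| stats count by component, host
From there, drilling into the dispatch/search-artifact component with | table _time data.* shows what per-search disk usage, if any, is recorded.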
Hi Splunkers,

I'm trying to compare the policy names from today with the policy names from the past 48 hours to see if there is any change in policy names. I tried using append as well as join to compare the results from the last 48 hours with today's timeframe, but I'm unable to get the expected output.

Example: in the table below, I'm trying to see if there are any changes in policy names from the last 48 hours. policy_3_sf's name changed to policy_3_sk; similarly, policy_4_sg and policy_5_gh changed to policy_4_sp and policy_5_gk respectively, and those new names are what I would like to list through my query.

Last_48_Hours_Policy_Names   Today_Policy_Names   New_Policy_Names
policy_1_xx                  policy_1_xx
policy_2_xs                  policy_2_xs
policy_3_sf                  policy_3_sk          policy_3_sk
policy_4_sg                  policy_4_sp          policy_4_sp
policy_5_gh                  policy_5_gk          policy_5_gk

Could you please let me know if my approach is correct or if something is missing in my queries?

Thanks,
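Not an answer from the thread, but a rough sketch of one way to express this kind of comparison, under some loud assumptions: the events carry a policy_name field, the numeric part of the name (policy_1, policy_2, ...) is a stable key for matching old and new names, "today" means the most recent 24 hours of the 48-hour window, and my_policy_index is a placeholder:
index=my_policy_index policy_name=* earliest=-48h latest=now
| eval period=if(_time >= relative_time(now(), "-24h"), "today", "previous")
| rex field=policy_name "^(?<policy_key>policy_\d+)"
| stats latest(eval(if(period=="previous", policy_name, null()))) As Last_48_Hours_Policy_Names latest(eval(if(period=="today", policy_name, null()))) As Today_Policy_Names by policy_key
| eval New_Policy_Names=if(Last_48_Hours_Policy_Names!=Today_Policy_Names, Today_Policy_Names, null())
| table Last_48_Hours_Policy_Names, Today_Policy_Names, New_Policy_Names
The if/null() inside stats keeps each aggregate scoped to one period, and the final eval only fills New_Policy_Names when the name actually changed, matching the table above.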
Hi @Shubham.Kadam, The Community has not jumped in to help. Did you happen to find a solution yourself that you can share? If you're still looking for help, you can contact AppDynamics Support. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases. If you do reach out and get a solution, please come back and share it here.
Tell us more.  What exactly do you mean by "does not work"?  What results/errors do you get?  What is the inputs.conf stanza for that input?
Hi @Ravi.Rajangam, Thanks for updating your post and sharing the information that answered your question. 
Hi @Ravi.Rajangam, I think this would be the page you are looking for: https://docs.appdynamics.com/appd/23.x/23.12/en/appdynamics-essentials/alert-and-respond/actions/notification-actions