All Posts

Thanks for the prompt response! 
Thank you for introducing msearch, aka mpreview.  As I mentioned before, mstats doesn't allow filtering by metric value, so you need to take care of the filtering and stats after mpreview.  Something like:

| mpreview index=itsi_im_metrics
| search Dimension.id IN ("*Process.aspx") calc:service.thaa_stress_requests_count_lr_tags>0 calc:service.thaa_stress_requests_lr_tags>0
| stats sum(calc:service.thaa_stress_requests_count_lr_tags) AS "Count", avg(calc:service.thaa_stress_requests_lr_tags) AS "Response" by Dimension.id
| eval Response=round((Response/1000000),2), Count=round(Count,0)
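If search is fussy about the colons in the metric field names, a variant worth trying (just a sketch, not tested against this data) is to do the numeric comparison in where, with the field names in single quotes:

| mpreview index=itsi_im_metrics
| search Dimension.id IN ("*Process.aspx")
| where 'calc:service.thaa_stress_requests_count_lr_tags'>0 AND 'calc:service.thaa_stress_requests_lr_tags'>0

then continue with the same stats and eval as above.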
There are a couple of ways to get the desired field from the ID.

| rex field=ID "-(?<Delimited_ID>[^-]+)"

OR

| eval tmp = split(ID, "-")
| eval Delimited_ID = mvindex(tmp,1)

Use the new field in a stats command just as you would any other field.

| stats count as Count by Delimited_ID, HTTP_responsecode
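A quick way to check the extraction on mock data (illustrative only, using a made-up ID):

| makeresults
| eval ID="XXXX-YYYY-ZZZZ-AAAA"
| rex field=ID "-(?<Delimited_ID>[^-]+)"
| table ID Delimited_ID

This returns Delimited_ID=YYYY: the rex captures the run of non-hyphen characters after the first hyphen, and mvindex(tmp,1) picks the same second segment because split is 0-indexed.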
To have different cron schedules you have to clone the alert and set a separate schedule for each copy.
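For example, assuming the request below means four runs a day on weekdays plus a single 6am run on weekends (one reading of it), the two clones could be scheduled as:

0 0,6,12,18 * * 1-5     (12am, 6am, 12pm, 6pm, Monday through Friday)
0 6 * * 0,6             (6am on Saturday and Sunday)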
Hi,  I have 4 fields in my index: ID, Method, URL, HTTP_responsecode. ID is in the form XXXX-YYYY-ZZZZ-AAAA.  Now, I want to delimit the ID column and extract the YYYY value, then run a stats command with the delimited value by HTTP_responsecode.  Something like below:

Delimited_ID   HTTP_responsecode   Count
YYYY           200                 10

Could you please help on how to delimit the value in the format mentioned above, and how to use the new delimited value in a stats command?
Hi All,  What are some of the best ways to monitor the health status of the KV store on the heavy forwarders in my environment? I am looking for a way to monitor it with a search from my search head.  Thanks in advance!
Hi @richgalloway, thank you for that. I have one more question, can you please help with this: I want a cron where the alert triggers 4 times a day, at 12am, 6am, 12pm, and 6pm, on weekdays, and only at 6am every day on weekends.
You can specify the exact hours you want the alert to run:

0 0,6,12 * * *
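Reading the standard five cron fields (minute, hour, day of month, month, day of week):

0 0,6,12 * * *

means minute 0 of hours 0, 6, and 12, every day: the alert fires at 12am, 6am, and 12pm, and the 6pm run is dropped.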
We have an alert whose cron schedule runs every 6 hours (0 */6 * * *), but I don't want to receive the alert at 6pm. How can I write a cron for that?
I don't have experience with that particular app but in theory it should work. Give it a try!
Can you verify that Splunk is applying the props and transforms to the logs?  E.g. what do your inputs.conf and props.conf stanzas look like for this log type, and on which Splunk machines are the inputs.conf and props.conf files placed?
Hi AppDynamics Community, I'm integrating the AppDynamics Python Agent into a FastAPI project for monitoring purposes and have encountered a bit of a snag regarding log verbosity in my stdout.

I'm launching my FastAPI app with the following command to include the AppDynamics agent:

pyagent run -c appdynamics.cfg uvicorn my_app:app --reload

My goal is to reduce the verbosity of the logs from both the AppDynamics agent and the proxy that are output to stdout, aiming to keep my console output clean and focused on more critical issues.

My module versions:

$ pip freeze | grep appdy
appdynamics==23.10.0.6327
appdynamics-bindeps-linux-x64==23.10.0
appdynamics-proxysupport-linux-x64==11.64.3

Here's the content of my `appdynamics.cfg` configuration file:

[agent]
app = my-app
tier = my-tier
node = teste-local-01

[controller]
host = my-controller.saas.appdynamics.com
port = 443
ssl = true
account = my-account
accesskey = my-key

[log]
level = warning
debugging = off

I attempted to decrease the log verbosity further by modifying the `log4j.xml` file for the proxy to set the logging level to WARNING. However, this change didn't have the effect I was hoping for. The `log4j.xml` file I adjusted is located at:

/tmp/appd/lib/cp311-cp311-63ff661bc175896c1717899ca23edc8f5fa87629d9e3bcd02cf4303ea4836f9f/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j.xml

Here are the adjustments I made to the `log4j.xml`:

<appender class="com.singularity.util.org.apache.log4j.ConsoleAppender" name="ConsoleAppender">
  <layout class="com.singularity.util.org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d{ABSOLUTE} %5p [%t] %c{1} - %m%n" />
  </layout>
  <filter class="com.singularity.util.org.apache.log4j.varia.LevelRangeFilter">
    <param name="LevelMax" value="FATAL" />
    <param name="LevelMin" value="WARNING" />
  </filter>

Despite these efforts, I'm still seeing a high volume of logs from both the agent and proxy. Could anyone provide guidance or suggestions on how to effectively lower the log output to stdout for both the AppDynamics Python Agent and its proxy? Any tips on ensuring my changes to `log4j.xml` are correctly applied would also be greatly appreciated. Thank you in advance for your help!

Example of logging messages I would like to remove from my stdout:

2024-03-23 11:15:28,409 [INFO] appdynamics.proxy.watchdog <22759>: Started watchdog with pid=22759
2024-03-23 11:15:28,409 [INFO] appdynamics.proxy.watchdog <22759>: Started watchdog with pid=22759
...
[AD Thread Pool-ProxyControlReq0] Sat Mar 23 11:15:51 BRT 2024[DEBUG]: JavaAgent - Setting AgentClassLoader as Context ClassLoader
[AD Thread Pool-ProxyControlReq0] Sat Mar 23 11:15:52 BRT 2024[INFO]: JavaAgent - Low Entropy Mode: Attempting to swap to non-blocking PRNG algorithm
[AD Thread Pool-ProxyControlReq0] Sat Mar 23 11:15:52 BRT 2024[INFO]: JavaAgent - UUIDPool size is 10
Agent conf directory set to [/home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics_bindeps/proxy/conf]
...
11:15:52,167 INFO [AD Thread Pool-ProxyControlReq0] BusinessTransactions - Starting BT Logs at Sat Mar 23 11:15:52 BRT 2024
11:15:52,168 INFO [AD Thread Pool-ProxyControlReq0] BusinessTransactions - ###########################################################
11:15:52,169 INFO [AD Thread Pool-ProxyControlReq0] BusinessTransactions - Using Proxy Version [Python Agent v23.10.0.6327 (proxy v23.10.0.35234) compatible with 4.5.0.21130 Python Version 3.11.6]
11:15:52,169 INFO [AD Thread Pool-ProxyControlReq0] JavaAgent - Logging set up for log4j2
...
11:15:52,965 INFO [AD Thread Pool-ProxyControlReq0] JDBCConfiguration - Setting normalizePreparedStatements to true
11:15:52,965 INFO [AD Thread Pool-ProxyControlReq0] CallGraphConfigHandler - Call Graph Config Changed callgraph-granularity-in-ms Value -null
Thanks @bowesmana @ITWhisperer. I was able to concoct a solution based on your inputs.

@bowesmana In your solution below, count=1 can also denote that the value was present in the live search and not in the inputlookup. However, your solution of using append works in this case.

| stats count by Time Value
| append
    [ | inputlookup lookup.csv
      ``` Filter the entries you expect here, e.g. using addinfo ```
      ``` | where Time is in the range you want ```
    ]
| stats count by Time Value
| where count=1

@ITWhisperer Your solution of using a flag variable worked for me, as it also handled the scenario where a particular value was found only in the live index search but not in the lookup. Thanks for this.

<your index>
| bin _time as period_start span=1h
| dedup period_start Value
| eval flag = 1
| append
    [ | inputlookup lookup.csv
      | eval period_start = ``` convert your time period here ```
      | eval flag = 2
    ]
| stats sum(flag) as flag by period_start Value
``` flag = 1 if only in index, 2 if only in lookup, or 3 if in both ```
| where flag = 2
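For the time filtering hinted at in the first subsearch's comments, one option is addinfo, which adds the search's time bounds as info_min_time and info_max_time (a sketch that assumes Time in the lookup is already epoch seconds; convert it with strptime first if it is a string):

| addinfo
| where Time >= info_min_time AND Time <= info_max_time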
Using the mpreview command to explore the results:

| mpreview index=itsi_im_metrics
| search "calc:service.thaa_stress_requests_lr_tags" "Dimension.id"="*Process.aspx"

Those values with '0' are not actual responses; for some reason these entries are there, and they are affecting the overall average response, so I wanted to remove those values from the calculation.

Instead of this --> (4115725 + 0 + 3692799) / 3, I want this --> (4115725 + 3692799) / 2
Hi @pop345,
if you need to compare an IP address from a lookup with one or more fields in the index events, you have two choices:

Search all the fields one by one (in this example only src and dest, but you can use more fields):

index="activity" ([ | inputlookup activity2 | rename lb AS src | fields src ] OR [ | inputlookup activity2 | rename lb AS dest | fields dest ]) | ...

Search as full text:

index="activity" [ | inputlookup activity2 | rename lb AS query | fields query ] | ...

With this second solution you search the lookup IPs also outside of the fields.

Ciao.
Giuseppe
I want a sample query that will guide me in creating one for both the TA and the KV store.
All those errors should be in the internal logs. Quite a few TAs currently use those too; they have their own log files as sources in _internal. You should just query those and look at what you have.
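As a starting point from the search head (a sketch only: <your_hf_hosts> is a placeholder, and the KVSt* component wildcard is based on component names I have seen in splunkd logs, so verify both against your own _internal data), assuming the heavy forwarders forward their internal logs:

index=_internal sourcetype=splunkd host=<your_hf_hosts> component=KVSt* (log_level=WARN OR log_level=ERROR)
| stats count by host component log_level

The KV store's own MongoDB log is also indexed (source=*mongod.log* in _internal) if you need lower-level detail.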
I do not believe that mstats supports filtering by metric value.  But the question is also too vague.  Maybe you can explain your use case?  What does "remove values with '0'" mean?  From what calculation?  Given the three sample values you illustrated for the metric calc:service.thaa_stress_requests_lr_tags, what is the desired result from avg(calc:service.thaa_stress_requests_lr_tags)?  If your search period contains these three values, is the actual result 2602841.3333333335, i.e., (4115725 + 0 + 3692799) / 3?  Why do you want it to be different from the definition?

Furthermore, what method do you use to reveal those three values?  A metrics index cannot be searched like an event index.  mstats can only give you aggregations.  Even if you group by _timeseries, you still only get aggregations.  This returns to the fundamental question: what is the point of "removing values with '0'"?
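To illustrate the aggregation-only behavior: mstats' WHERE clause can filter on dimensions, just not on individual metric values (a sketch along the lines of the searches in this thread):

| mstats avg(calc:service.thaa_stress_requests_lr_tags) AS Response WHERE index=itsi_im_metrics AND "Dimension.id"="*Process.aspx" BY Dimension.id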
The question is next to unanswerable.  First, what does "type" mean?  Is that a field name?  Attempting to show data using a screenshot is bad enough, but the screenshot only includes one column.  Do we assume that there is another column named "type", and the values are "type1" and "type2"?  Pro tip #1: Illustrate data in text.  Tabulate with mock values if you want to anonymize.

Second, what does "remove" mean?  Do you want to remove the row that contains a null value in this field named "error message"?  As if using a single-column screenshot is not confusing enough, your screenshot shows a heading "ErrorMessage" in so-called camel case with no space.

Now, forget type, because it doesn't seem to have any bearing on the question and just adds to the overall confusion. If you are asking how to remove a row with a null value in a given field, such as one named "error message", all you need to do is test it using isnull or isnotnull.

| where isnotnull('error message')

Is this what you are asking?
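To see the behavior on mock data (an illustrative run, not your real events):

| makeresults count=2
| streamstats count as row
| eval "error message" = if(row=1, "something failed", null())
| where isnotnull('error message')

Only the row with a non-null "error message" survives. Note the quoting convention: double quotes to assign a field name containing a space on the left side of eval, single quotes to reference it inside an expression.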
Hi @vinod743374,
use eval:

| eval ErrorMessage=if(ErrorMessage="Type1","Your message for Error Type 1",ErrorMessage)

Ciao.
Giuseppe
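If more types need their own messages later, a case() variant keeps it readable (a sketch extending the same idea; the Type2 message is invented for illustration):

| eval ErrorMessage=case(ErrorMessage="Type1", "Your message for Error Type 1",
                         ErrorMessage="Type2", "Your message for Error Type 2",
                         true(), ErrorMessage)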