All Posts

@ITWhisperer Below is the search I am using in the panel:
|`$macro_token$(span_token="$span_token$")`
| search LocationQualifiedName="*/Aisle*Entry*" OR LocationQualifiedName="*/Aisle*Exit*"
| strcat "raw" "," location group_name
| timechart sum(count) as cnt by location
Screenshot:
Try opening the panel search in a new search window and see what your searches actually look like.
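For example, for a time range longer than 30 days the dashboard's eval tokens resolve to the 1d summary macro, so the panel search would expand to something like this (hand-substituted here purely for illustration):
| `throughput_macro_summary_1d(span_token="d")`
| search LocationQualifiedName="*/Aisle*Entry*" OR LocationQualifiedName="*/Aisle*Exit*"
| strcat "raw" "," location group_name
| timechart sum(count) as cnt by location
Running that directly in Search makes it easy to see whether the summary macro returns any events at all for the selected time range.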
@ITWhisperer Summary indexing is returning results for 30 days, but those results are not populating the dashboard. When I search over a 30-day range in the dashboard, no results appear.
As richgalloway said, you need 2 separate alerts for the 2 separate cron schedules. To make this maintainable, you could create a single Saved Search and then create 2 separate alerts that reference it using the | savedsearch command (https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Savedsearch). Each alert gets its own cron schedule: 1) four times a day at 12am, 6am, 12pm and 6pm on weekends (Sat and Sun): 0 */6 * * 0,6 2) only at 6AM on weekdays (Mon-Fri): 0 6 * * 1-5 For formulating cron schedules, I recommend the website https://crontab.guru/ as it shows a human-readable version of the schedule at the top.
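As a sketch of the savedsearch approach (the name shared_alert_search is a placeholder, not from this thread), each alert would contain nothing but a reference to the shared search and differ only in its cron schedule and trigger actions:
| savedsearch shared_alert_search
That way any change to the underlying search logic only has to be made in one place, and both alerts pick it up.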
What is the issue? ("not getting proper results" and "not populating results properly" do not really explain what is wrong.)
Hi @pubuduhashan, good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
To obtain the results in a dashboard I am doing the following. 1.) First I created a datamodel. 2.) The datamodel is used in macros, which run on a 1h and a 1d basis. 3.) Those macros are passed into saved searches that collect the results on an hourly and daily basis. 4.) The span_token is passed to the macro from the dashboard code below. 5.) The macros and saved searches are attached at the end of the dashboard code. Issue: I am not getting proper results with this approach and the dashboard is not populating results properly. I need guidance to fix the issue.

Dashboard code:

<form version="1.1" theme="light">
  <label>Throughput : Highbay</label>
  <init>
    <set token="span_token">$form.span_token$</set>
  </init>
  <fieldset submitButton="false"></fieldset>
  <row>
    <panel>
      <input type="time" token="time" id="my_date_range" searchWhenChanged="true">
        <label>Select the Time Range</label>
        <default>
          <earliest>-7d@h</earliest>
          <latest>now</latest>
        </default>
        <change>
          <eval token="time.earliest_epoch">if('earliest'="",0,if(isnum(strptime('earliest', "%s")),'earliest',relative_time(now(),'earliest')))</eval>
          <eval token="time.latest_epoch">if(isnum(strptime('latest', "%s")),'latest',relative_time(now(),'latest'))</eval>
          <eval token="macro_token">if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 2592000, "throughput_macro_summary_1d",if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 86400, "throughput_macro_summary_1h","throughput_macro_raw"))</eval>
          <eval token="form.span_token">if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 2592000, "d", if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 86400, "h", $form.span_token$))</eval>
        </change>
      </input>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <title>Total Pallet</title>
        <search>
          <query>|`$macro_token$(span_token="$span_token$")` | search LocationQualifiedName="*/Aisle*Entry*" OR LocationQualifiedName="*/Aisle*Exit*" |strcat "raw" "," location group_name | timechart sum(count) as cnt by location</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <title>Pallet IN</title>
        <search>
          <query>|`$macro_token$(span_token="$span_token$")` | search LocationQualifiedName="*/Aisle*Entry*" |strcat "raw" "," location group_name | timechart sum(count) as cnt by location</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <title>Pallet OUT</title>
        <search>
          <query>|`$macro_token$(span_token="$span_token$")` | search LocationQualifiedName="*/Aisle*Exit*" |strcat "raw" "," location group_name | timechart sum(count) as cnt by location</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
</form>

Macros:

throughput_macro_raw(1)
datamodel Walmart_throughput Highbay_throughput flat
| bin _time span="$span_token$"
| rename AsrTsuEventTrackingUpdate.LocationQualifiedName as LocationQualifiedName
| table + _time LocationQualifiedName location date_hour date_mday date_minute date_month date_month date_second date_wday date_year

throughput_macro_summary_1d(1)
search index="tput_summary" sourcetype="tput_summary_1d"
| bin _time span="$span_token$"
| table + _time LocationQualifiedName location date_hour date_mday date_minute date_month date_month date_second date_wday date_year count

throughput_macro_summary_1h(1)
search index="tput_summary" sourcetype="tput_summary_1h"
| bin _time span=$span_token$
| table + _time LocationQualifiedName location date_hour date_mday date_minute date_month date_month date_second date_wday date_year count

Saved searches:

throughput_summary_index_1d
| `throughput_macro_raw(span_token="1d")`
| strcat "raw" "," location group_name
| strcat "raw" "," location group_name
| stats count by location _time LocationQualifiedName
| collect index="tput_summary" sourcetype="tput_summary_1d"

throughput_summary_index_1h
| `throughput_macro_raw(span_token="1h")`
| strcat "raw" "," location group_name
| stats count by location _time LocationQualifiedName
| collect index="tput_summary" sourcetype="tput_summary_1h"
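One quick sanity check (only a sketch, reusing the index and sourcetype names above) is to confirm that the summary index actually contains data for the time range the dashboard is searching:
index="tput_summary" sourcetype="tput_summary_1d" OR sourcetype="tput_summary_1h"
| stats count min(_time) as first_event max(_time) as last_event by sourcetype
| convert ctime(first_event) ctime(last_event)
If this returns nothing for the window in question, the problem is in the collection saved searches rather than in the dashboard panels.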
Thanks for the prompt response! 
Thank you for introducing msearch aka mpreview. As I mentioned before, mstats doesn't allow filtering by value, so you need to take care of the stats after mpreview. Something like
| mpreview index=itsi_im_metrics
| search Dimension.id IN ("*Process.aspx") calc:service.thaa_stress_requests_count_lr_tags>0 calc:service.thaa_stress_requests_lr_tags>0
| stats sum(calc:service.thaa_stress_requests_count_lr_tags) as "Count", avg(calc:service.thaa_stress_requests_lr_tags) as "Response" by Dimension.id
| eval Response=round((Response/1000000),2), Count=round(Count,0)
There are a couple of ways to get the desired field from the ID. Either
| rex field=ID "-(?<Delimited_ID>[^-]+)"
or
| eval tmp = split(ID, "-")
| eval Delimited_ID = mvindex(tmp,1)
Use the new field in a stats command just as you would any other field.
| stats count as Count by Delimited_ID, HTTP_responsecode
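Put together as a single search (a sketch only; index=your_index is a placeholder for whichever index holds these events), this produces the table from the question:
index=your_index
| rex field=ID "-(?<Delimited_ID>[^-]+)"
| stats count as Count by Delimited_ID, HTTP_responsecode
With an ID like XXXX-YYYY-ZZZZ-AAAA, the rex captures the segment after the first hyphen, so Delimited_ID comes out as YYYY.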
To have different cron schedules you have to clone the alert and set a separate schedule for each copy.
Hi, I have 4 fields in my index: ID, Method, URL, HTTP_responsecode. ID is in the form XXXX-YYYY-ZZZZ-AAAA. Now, I want to delimit the ID column, extract the YYYY value, and then run a stats command with the delimited value by HTTP_responsecode. Something like the below:
Delimited_ID HTTP_responsecode Count
YYYY 200 10
Could you please help with how to delimit the value in the format mentioned above and how to use the new delimited value in a stats command?
Hi All, what are some of the best ways to monitor the health status of the KV store on the heavy forwarders in my environment? I am looking for a way to monitor it with a search from my search head. Thanks in advance!
Hi @richgalloway, thank you for that. I have one more question, can you please help with this? I want a cron where the alert triggers 4 times a day (12am, 6am, 12pm, 6pm) on weekends, and only at 6am on weekdays.
You can specify the exact hours you want the alert to run: 0 0,6,12 * * *
We have an alert whose cron schedule runs every 6 hours (0 */6 * * *), but I don't want to receive the alert at 6pm only. How can I write a cron for that?
I don't have experience with that particular app but in theory it should work. Give it a try!
Can you troubleshoot whether Splunk is applying the props and transforms to the logs? E.g. what do your inputs.conf and props.conf stanzas look like for this log type, and on which Splunk machines are the inputs.conf and props.conf files placed?
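For reference, a minimal sketch of what those stanzas might look like (the monitor path, the sourcetype my_app_logs and the transform name are made-up placeholders, not taken from this thread):
inputs.conf, on the forwarder reading the file:
[monitor:///var/log/my_app/app.log]
sourcetype = my_app_logs
props.conf, on the first parsing instance (heavy forwarder or indexer):
[my_app_logs]
TRANSFORMS-routing = my_app_transform
Index-time props and transforms generally take effect on the first heavy forwarder or indexer that parses the data, so placing them only on a universal forwarder or a search head usually has no effect.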
Hi AppDynamics Community, I'm integrating the AppDynamics Python Agent into a FastAPI project for monitoring purposes and have encountered a bit of a snag regarding log verbosity in my stdout.

I'm launching my FastAPI app with the following command to include the AppDynamics agent:

pyagent run -c appdynamics.cfg uvicorn my_app:app --reload

My goal is to reduce the verbosity of the logs from both the AppDynamics agent and the proxy that are output to stdout, aiming to keep my console output clean and focused on more critical issues.

My module versions:

$ pip freeze | grep appdy
appdynamics==23.10.0.6327
appdynamics-bindeps-linux-x64==23.10.0
appdynamics-proxysupport-linux-x64==11.64.3

Here's the content of my `appdynamics.cfg` configuration file:

[agent]
app = my-app
tier = my-tier
node = teste-local-01

[controller]
host = my-controller.saas.appdynamics.com
port = 443
ssl = true
account = my-account
accesskey = my-key

[log]
level = warning
debugging = off

I attempted to decrease the log verbosity further by modifying the `log4j.xml` file for the proxy to set the logging level to WARNING. However, this change didn't have the effect I was hoping for. The `log4j.xml` file I adjusted is located at:

/tmp/appd/lib/cp311-cp311-63ff661bc175896c1717899ca23edc8f5fa87629d9e3bcd02cf4303ea4836f9f/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j.xml

Here are the adjustments I made to the `log4j.xml`:

<appender class="com.singularity.util.org.apache.log4j.ConsoleAppender" name="ConsoleAppender">
  <layout class="com.singularity.util.org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d{ABSOLUTE} %5p [%t] %c{1} - %m%n" />
  </layout>
  <filter class="com.singularity.util.org.apache.log4j.varia.LevelRangeFilter">
    <param name="LevelMax" value="FATAL" />
    <param name="LevelMin" value="WARNING" />
  </filter>

Despite these efforts, I'm still seeing a high volume of logs from both the agent and proxy. Could anyone provide guidance or suggestions on how to effectively lower the log output to stdout for both the AppDynamics Python Agent and its proxy? Any tips on ensuring my changes to `log4j.xml` are correctly applied would also be greatly appreciated. Thank you in advance for your help!

Example of logging messages I would like to remove from my stdout:

2024-03-23 11:15:28,409 [INFO] appdynamics.proxy.watchdog <22759>: Started watchdog with pid=22759
2024-03-23 11:15:28,409 [INFO] appdynamics.proxy.watchdog <22759>: Started watchdog with pid=22759
...
[AD Thread Pool-ProxyControlReq0] Sat Mar 23 11:15:51 BRT 2024[DEBUG]: JavaAgent - Setting AgentClassLoader as Context ClassLoader
[AD Thread Pool-ProxyControlReq0] Sat Mar 23 11:15:52 BRT 2024[INFO]: JavaAgent - Low Entropy Mode: Attempting to swap to non-blocking PRNG algorithm
[AD Thread Pool-ProxyControlReq0] Sat Mar 23 11:15:52 BRT 2024[INFO]: JavaAgent - UUIDPool size is 10
Agent conf directory set to [/home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics_bindeps/proxy/conf]
...
11:15:52,167 INFO [AD Thread Pool-ProxyControlReq0] BusinessTransactions - Starting BT Logs at Sat Mar 23 11:15:52 BRT 2024
11:15:52,168 INFO [AD Thread Pool-ProxyControlReq0] BusinessTransactions - ###########################################################
11:15:52,169 INFO [AD Thread Pool-ProxyControlReq0] BusinessTransactions - Using Proxy Version [Python Agent v23.10.0.6327 (proxy v23.10.0.35234) compatible with 4.5.0.21130 Python Version 3.11.6]
11:15:52,169 INFO [AD Thread Pool-ProxyControlReq0] JavaAgent - Logging set up for log4j2
...
11:15:52,965 INFO [AD Thread Pool-ProxyControlReq0] JDBCConfiguration - Setting normalizePreparedStatements to true
11:15:52,965 INFO [AD Thread Pool-ProxyControlReq0] CallGraphConfigHandler - Call Graph Config Changed callgraph-granularity-in-ms Value -null
Thanks @bowesmana @ITWhisperer. I was able to concoct a solution based on your inputs.

@bowesmana In your solution below, count=1 can also denote that the value was present in the live search and not in the inputlookup. However, your solution of using append works in this case.

| stats count by Time Value
| append
    [ | inputlookup lookup.csv
    ``` Filter the entries you expect here, e.g. using addinfo ```
    ``` | where Time is in the range you want ```
    ]
| stats count by Time Value
| where count=1

@ITWhisperer Your solution of using a flag variable worked for me, as it also handled the scenario where a particular value was found only in the live index search but not in the lookup. Thanks for this.

<your index>
| bin _time as period_start span=1h
| dedup period_start Value
| eval flag = 1
| append
    [| inputlookup lookup.csv
    | eval period_start = ``` convert your time period here ```
    | eval flag = 2]
| stats sum(flag) as flag by period_start Value
``` flag = 1 if only in index, 2 if only in lookup, or 3 if in both ```
| where flag = 2