All Posts


Hey @MrJohn230,

In the classic dashboard framework, changing the background color of the single value visualization is not possible. In Dashboard Studio, you can update the options parameter to change the background color. Reference for the Dashboard Studio single value visualization: https://docs.splunk.com/Documentation/Splunk/9.1.2/DashStudio/chartsSV#Single_value_2

To get a background color in a classic dashboard, you would need to use charts rather than the single value visualization: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Viz/ChartConfigurationReference

---
If the above answer helps you, an upvote is appreciated.
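For the classic-chart route, the chart configuration reference linked above includes a background color option. A minimal Simple XML sketch (the search and color value are placeholders; confirm the option name against the reference):

<chart>
  <search>
    <query>index=_internal | timechart count</query>
  </search>
  <option name="charting.backgroundColor">#1A1C20</option>
</chart>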
Since you are referencing EventCode=4624, are you looking to use lack of login activity to determine whether a system is inactive? If that is what you are trying to do, I think this SPL may do it (provided you have a static threshold for time since a user's last login):

index=<windows_index> sourcetype=WinEventLog signature_id="4624"
| fields + _time, dest, signature_id, user, signature
| stats values(signature) as signature, latest(_time) as last_login_epoch by dest, user
| eval seconds_since_last_login=now()-'last_login_epoch', days_since_last_login=round(('seconds_since_last_login'/(60*60*24)), 2), duration_since_last_login=tostring(seconds_since_last_login, "duration")
``` user exclusion list ```
``` if this list is large then storing results in a lookup or macro may make the most sense ```
``` Example SPL for exclusion using a lookup:
| lookup windows_user_exclusion_list user OUTPUT user as exclusion_user
| where isnull(exclusion_user)
| fields - exclusion_user ```
``` Example SPL for exclusion using a hardcoded list of users:
| search NOT user IN ("user_1", "user_2", "user_3", ..., "user_n") ```
| eventstats min(seconds_since_last_login) as latest_login_on_host_by_user_in_seconds by dest
| eval last_login_user=if('seconds_since_last_login'=='latest_login_on_host_by_user_in_seconds', 'user', null())
| stats max(last_login_epoch) as latest_login_epoch, min(latest_login_on_host_by_user_in_seconds) as latest_login_on_host_by_user_in_seconds, values(last_login_user) as last_login_user by dest
| eval days_since_last_login=round(('latest_login_on_host_by_user_in_seconds'/(60*60*24)), 2), duration_since_last_login=tostring('latest_login_on_host_by_user_in_seconds', "duration")
| convert ctime(latest_login_epoch) as latest_login_by_user_timestamp
| fields dest, last_login_user, latest_login_by_user_timestamp, days_since_last_login, duration_since_last_login
``` This where clause can be tuned to the desired threshold ```
| where 'days_since_last_login'>14

The output will look something like this (screenshot not included).
Hi @Rajkumar.Varma, Not sure if this is exactly what you are looking for, but wanted to share it in case it helps. https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-view-license-usage-on-the-Subscription-page/ta-p/32606
Thank you for clarifying! I'm new to tsidx files, so I didn't know if they were meant to be read. I guess we haven't been collecting the logs from Active Directory specifically, so we're working on that.
Have you tried switching "where" to "search" in your SPL?
Hello Splunkers, I am new to Splunk and am trying to figure out how to parse nested JSON data emitted by an end-of-line test. Here is a sample event:

{"serial_number": "PLACEHOLDER1234", "type": "Test", "result": "Pass", "logs": [{"test_name": "UGC Connect", "result": "Pass"}, {"test_name": "Disable UGC USB Comm Watchdog", "result": "Pass"}, {"test_name": "Hardware Rev", "result": "Pass", "received": "4"}, {"test_name": "Firmware Rev", "result": "Pass", "received": "1.8.3.99", "expected": "1.8.3.99"}, {"test_name": "Set Serial Number", "result": "Pass", "received": "1 A S \n", "expected": "1 A S"}, {"test_name": "Verify serial number", "result": "Pass", "received": "JC0024EW1482300425", "expected": "JC0024EW1482300425", "reason": "Truncated full serial number: 30913JC0024EW1482300425 to JC0024EW1482300425"}, {"test_name": "Thermocouple", "pt1_ugc": "24969.0", "pt1": "25000", "pt2_ugc": "19954.333333333332", "pt2": "20000", "pt3_ugc": "14993.666666666666", "pt3": "15000", "result": "Pass", "tolerance": "1000 deci-mV"}, {"test_name": "Cold Junction", "result": "Pass", "ugc_cj": "278", "user_temp": "270", "tolerance": "+ or - 5 C"}, {"test_name": "Glow Plug Open and Short", "result": "Pass", "received": "GP Open, Short, and Load verified OK.", "expected": "GP Open, Short, and Load verified OK."}, {"test_name": "Glow Plug Power On", "result": "Pass", "received": "User validated Glow Plug Power"}, {"test_name": "Glow Plug Measure", "pt1_ugc": "848", "pt1": "2070", "pt1_tolerance": "2070", "pt2_ugc": "5201", "pt2": "5450", "pt2_tolerance": "2800", "result": "Pass"}, {"test_name": "Motor Soft Start", "result": "Pass", "received": "Motor Soft Start verified", "expected": "Motor Soft Start verified by operator"}, {"test_name": "Motor", "R_rpm_ugc": 1525.0, "R_rpm": 1475, "R_v_ugc": 160.0, "R_v": 155, "R_rpm_t": 150, "R_v_t": 160, "R_name": "AUGER 320 R", "F_rpm_ugc": 1533.3333333333333, "F_rpm": 1475, "F_v_ugc": 164.0, "F_v": 182, "F_rpm_t": 150, "F_v_t": 160, "F_name": "AUGER 320 F", "result": "Pass"}, {"test_name": "Fan", "ugc_rpm": 2436.0, "rpm": 2130, "rpm_t": 400, "ugc_v": 653.3333333333334, "v": 630, "v_t": 160, "result": "Pass"}, {"test_name": "RS 485", "result": "Pass", "received": "All devices detected", "expected": "Devices detected: ['P']"}, {"test_name": "Close UGC Port", "result": "Pass"}, {"test_name": "DFU Test", "result": "Pass", "received": "Found DFU device"}, {"test_name": "Power Cycle", "result": "Pass", "received": "User confirmed power cycle"}, {"test_name": "UGC Connect", "result": "Pass"}, {"test_name": "Close UGC Port", "result": "Pass"}, {"test_name": "USB Power", "result": "Pass", "received": "USB Power manually verified"}]}

I want to be able to extract the test data (all key-value pairs) from each test. Ideally I would like to create dashboard charts showing responses from the Motor and Fan tests, among others.

Here is a sample search I have been using, which allows me to create a table with the serial number, overall test result, individual test name, and individual test result:

index="factory_mtp_events"
| search sourcetype="placeholder" source="placeholder" serial_number="PLACEHOLDER*"
| spath logs{} output=logs
| stats count by serial_number result logs
| eval _raw=logs
| spath test_name output=test_name
| spath result output=test_result
| table serial_number result test_name test_result

How can I index into the logs{} section and pull out all results dependent on test_name?
So, how can I query for logs{}.test_name="Motor" and have the result yield:

{"test_name": "Motor", "R_rpm_ugc": 1525.0, "R_rpm": 1475, "R_v_ugc": 160.0, "R_v": 155, "R_rpm_t": 150, "R_v_t": 160, "R_name": "AUGER 320 R", "F_rpm_ugc": 1533.3333333333333, "F_rpm": 1475, "F_v_ugc": 164.0, "F_v": 182, "F_rpm_t": 150, "F_v_t": 160, "F_name": "AUGER 320 F", "result": "Pass"}
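A sketch of one way to do this, assuming each event's logs{} array is small enough for mvexpand (field names are taken from the sample event above):

index="factory_mtp_events" sourcetype="placeholder" serial_number="PLACEHOLDER*"
| spath logs{} output=logs
| mvexpand logs ``` one row per individual test ```
| eval _raw=logs ``` let spath re-parse just this one test object ```
| spath ``` extracts test_name, result, R_rpm_ugc, F_rpm_ugc, etc. ```
| search test_name="Motor"
| table serial_number test_name result R_rpm_ugc R_rpm F_rpm_ugc F_rpm

Because spath with no arguments extracts every key in the re-parsed object, the same pipeline with test_name="Fan" surfaces the fan metrics for charting.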
I have this search query and it is working fine:

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="test"
| spath output=pp_user_action_name input=user_actions path=name
| where pp_user_action_name in ("test.aspx")
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval pp_user_action_name=substr(pp_user_action_name,0,150)
| eventstats avg(pp_user_action_response) AS "Avg_today" by pp_user_action_name
| stats count(pp_user_action_response) As "Today_Calls", perc90(pp_user_action_response) AS "Perc90_today" by pp_user_action_name Avg_today
| eval Perc90_today=round(Perc90_today/1000,2)
| eval Avg_today=round(Avg_today/1000,2)
| table pp_user_action_name, Today_Calls, Avg_today, Perc90_today

PFA screenshot of the results. Now I am trying to pass the pp_user_action_name value from the test.csv file instead, and I am not getting any results:

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="test"
| spath output=pp_user_action_name input=user_actions path=name
| where pp_user_action_name in ([| inputlookup test.csv])
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval pp_user_action_name=substr(pp_user_action_name,0,150)
| eventstats avg(pp_user_action_response) AS "Avg_today" by pp_user_action_name
| stats count(pp_user_action_response) As "Today_Calls", perc90(pp_user_action_response) AS "Perc90_today" by pp_user_action_name Avg_today
| eval Perc90_today=round(Perc90_today/1000,2)
| eval Avg_today=round(Avg_today/1000,2)
| table pp_user_action_name, Today_Calls, Avg_today, Perc90_today

How to fix this? Thanks in advance.
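One common fix, since where ... in (...) does not accept a subsearch but search does: a subsearch that returns only the lookup field expands into an OR of field=value terms. A sketch, assuming test.csv has a column named pp_user_action_name (rename it inside the subsearch if it does not):

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="test"
| spath output=pp_user_action_name input=user_actions path=name
| search [ | inputlookup test.csv | fields pp_user_action_name ]
...

The rest of the pipeline can stay as it is.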
Hi, I haven't checked this for some time, but at least earlier that could be empty. See https://data-findings.com/wp-content/uploads/2023/04/M365-app-and-TAs-2023-03-15-sanitised.pdf, page 13. r. Ismo
Wasn't getting results, so I went with a longer window and a smaller error count. Still not getting anything. Here is how I have it:

index=eits_wineventlog_security sourcetype=WinEventLog EventCode=4771 OR EventCode=4776
| timechart span=60m count by user
| where count>5

I know for sure we have had enough failures to at least get a few. Thanks
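One thing stands out: after timechart span=60m count by user there is no field literally named count, only one column per user, so where count>5 filters out every row. A sketch that keeps a real count field, using the same index and sourcetype (the parentheses also stop the OR from escaping the index/sourcetype filters):

index=eits_wineventlog_security sourcetype=WinEventLog (EventCode=4771 OR EventCode=4776)
| bin _time span=1h
| stats count by _time, user
| where count>5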
Maybe not exactly what you are looking for, but at least you could clear out (drop) those events, like:

basesearch earliest=-1d@d latest=now
| eval takeIn = case(_time>=relative_time(now(),"@d"), "take", _time<=relative_time(now(), "-1d"), "take", true(), "drop")
| where takeIn = "take"
| timechart span=1h count
| timewrap d series=short
| fields _time s1 s0
| rename s1 as today, s0 as yesterday
Hi, why can't you / don't you want to use @glc_slash_it's & @PickleRick's answer? At least with test data it seems to work. You could test it like:

<your basesearch OR | makeresults>
| table Status, timeval, CompanyCode, CN
| appendpipe
    [ stats count
    | eval error="thats not cool"
    | where count==0
    | table error
    | fields - Status, timeval, CompanyCode, CN]
| transpose 0
| eval allnulls=1
| foreach row* [ eval allnulls=if(isnull('<<FIELD>>'),allnulls,0) ]
| where allnulls=0
| fields - allnulls
| transpose 0 header_field=column
| fields - column

r. Ismo
Thanks @isoutamo. How do we get the comparison between today and yesterday along the same timeline? Currently I am getting yesterday's whole day (24 hrs), but today only from midnight up to now.

Is it possible to bring only today (midnight till now) vs the same timeframe on the previous day into the chart? Current SPL:

basesearch earliest=-1d@d latest=now
| timechart span=1h count
| timewrap d series=short
| fields _time s1 s0
| rename s1 as today, s0 as yesterday
Thanks for the reply, but my problem is a little different: my search has a table command before using appendpipe for displaying a custom message. The problem now is that if the table is empty, it should display only the custom message, but it is showing the empty table plus the message, as in the image below.
Think I found a hacky way of doing this. It seems to be recursive and should loop through all mvfield values, assigning each one its own unique field name. You can replicate this with this SPL:

| makeresults
| eval mv_field=split("a|b|c|d|e|f|aa", "|")
``` Below SPL is what loops through the MV field and gives each entry its own unique field name ```
| eval iter=0, hacked_json=json_object()
| foreach mode=multivalue mv_field
    [ | eval iter='iter'+1, hacked_json=json_set(hacked_json, "mv_field_".'iter', '<<ITEM>>') ]
| spath input=hacked_json
| fields - hacked_json, iter
I have configured the app for Microsoft 365, which was working properly, but it stopped working; after checking, it was found that one of the keys or certificates had expired. I contacted the administrator asking for the "Client Secret" and he gave me that information, but the app also asks for the "Cloud App Security Token" field, and I really have no idea what information I should request from the administrator. I would be grateful if you could explain this to me, if possible. Thanks
I came up with this in the middle of last year - perhaps you can adapt it to your purposes? Solved: Re: Mutlivalue Field Problem - Splunk Community
Hi, could someone assist me in setting the threshold for this correlation search in ES? It's generating an excessive number of notables, roughly 30k over the last 7 days. How can we reduce the number of notables? Additionally, I've provided the bytes_out data for the last 24 hrs below; please suggest a threshold based on that data.

| tstats `summariesonly` count values(sourcetype) AS sourcetype, values(All_Traffic.src_zone) AS src_zone, earliest(_time) as earliest, latest(_time) as latest, values(All_Traffic.action) AS action, values(All_Traffic.bytes_out) AS bytes_out, values(All_Traffic.bytes_in) AS bytes_in, sum(All_Traffic.bytes) AS bytes, values(All_Traffic.direction) AS direction, values(All_Traffic.app) AS app from datamodel=Network_Traffic

bytes_out values for the last 24 hrs:
163 594 594 594 594 294 686 215 392 392 98 954 215 86 424 900 530 594 594 117 294 882 148 258 320 594 516 142 215 159 215 86 98 98 369 401 159 215 215 594 212 215 220 585 203 594 680 212 159 159 159 159 159 718 159 159 159 159 594 221 146 318 318 159 159 318 318 318 318 159 159 159 159 159 159 636 318 159 159 159 159 159 159 159 159 159 159 159 159 159 159 318 159 318 318 318 318 326 159 159 753 159 326 657 912 159 318 159 159 159 159 159 318 148 148 814 594 320 159 159 159 159 159 159 159 159 159 318 318 159 795 318 318 159 159 565 870 159 321 912 318 318 508 159 159 567 487 159 836 507 159 159 318 477 318 318 159 159 318 318 318 477 246 155 594 594 594 594 594 594 99 159 159 222 241 159 438 565 400 159 159 159 318 795 148 119 667 159 479 486 477 477 406 828 477 222 222 148 753 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 594 784 323 594 318 159 388 318 318 711 318 388 159 159 159 159 350 350 318 318 560 318 318 719 318 646 620 159 801 159 620 159 779 318 912 318 318 318 318 318 318 323 641 810 318 318 318 323 620 318 620 318 870 159 159 159 620 461 318 318 779 318 870 159 870 323 388 318 318 870 318 350 832 318 159 318 318 810 318 159 318 318 318 318 318 733 318 323 323 323 651 159 159 318 318 318 318 318 318 159 159 159 159 159 159 159 159 159 159 159 159 318 318 318 318 159 159 159 159 159 159 159 159 318 159 159 159 159 159 159 159 318 159 319 318 318 665 935 356 574 197 197 201 159 477 477 963 477 486 159 318 159 594 155 824 400 350 318 477 222 159 222 296 518 666 318 477 171 318 318 159 159 159 159 155 318 318 318 318 477 159 159 159 159 318 318 159 318 159 159 318 722 318 318 439 549 328 477 159 318 964 603 318 318 159 159 196 370 148 753 159 159 569 159 765 477 594 370 370 318 318 636 318 466 587 428 444 159 148 148 159 159 159 159 159 159 159 159 159 159 159 159 159 753 594 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 477 159 758 326 979 159 159 318 318 318 318 318 594 318 318 159 318 159 318 159 159 159 159 159 159 159 159 318 318 318 318 159 159 636 159 159 679 159 753 667 318 318 318 159 159 159 159 753 331 331 318 159 649 159 353 353 159 159 512 159 326 955 159 753 159 326 326 159 159 912 753 159 159 594 325 325 318 318 912 159 318 159 318 326 159 159 753 159 326 924 318 943 159 665 159 594 594 400 159 159 159 159 159 159 159 159 159 159 159 159 159 159 908 222 439 525 318 159 603 159 159 148 222 318 318 728 318 318 159 159 159 159 155 155
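Not knowing what this detection is meant to catch, one generic way to cut notable volume is to alert only on statistical outliers rather than on every event. A sketch (field names follow the search above; grouping by All_Traffic.src and the 2-sigma multiplier are assumptions to tune):

| tstats `summariesonly` sum(All_Traffic.bytes_out) AS bytes_out from datamodel=Network_Traffic by All_Traffic.src
| eventstats avg(bytes_out) AS avg_out, stdev(bytes_out) AS stdev_out
| where bytes_out > avg_out + 2*stdev_out

Given that the sample values above sit roughly between 86 and 979 bytes, even a simple static floor (for example, where bytes_out > 1000) would suppress the bulk of these notables; which cutoff is safe depends on what the search is meant to detect.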
Yes I have 4.2 currently. I haven't used that plugin. I'll take a gander at it. Thank you
I have a multivalue field which I would like to expand to individual fields, like so:

| makeresults count=1
| eval a=mvappend("1","7")
| eval a_0=mvindex(a,0,0)
| eval a_1=mvindex(a,1,1)

However, the length might be >2 and I would like a generic solution for this. I know I can create an MV field with an index, use mvexpand, and then stats to get everything back into a single event, but I run into memory issues with this on my own data.

In short: not use mvexpand, and solve the issue in a generic fashion.
Hi. We noticed a few of our RHEL8 servers with splunkforwarder installed log the line below up to thousands of times, causing splunkd.log files to grow excessively and fill the /opt directory. Sometimes it occurs every few seconds, while other times it will log hundreds of times per second. So far only a handful of servers are experiencing the problem, and we have many others running the same version and OS.

09-17-2023 20:33:50.029 +0000 ERROR BTreeCP [2386469 TcpOutEloop] - failed: failed to mkdir /opt/splunkforwarder/var/lib/splunk/fishbucket/splunk_private_db/snapshot.tmp: File exists

Restarting the splunkforwarder service mitigates the problem temporarily, but the error occurs again within a few days. When the error messages come in, the directory already exists and contains files:

# ls /opt/splunkforwarder/var/lib/splunk/fishbucket/splunk_private_db/snapshot.tmp/
btree_index.dat btree_records.dat

We are not sure what causes the issue or how to reproduce it.