All Topics


1. Problem description

The current production environment returns incomplete data from queries that use the map command. We built the dashboard and the ad-hoc query separately, but the underlying problem is the same: the dashboard query results are also wrong, and both rely on map. I hope someone can help solve this!

Core requirement: alerting on "continuous tracking of sensitive information" behavior.

Requirements:
1) When audit records matching the continuous-tracking behavior pattern are found, alert information is displayed on the homepage; clicking the alert shows the related alert details.
2) Auditors can set query values through the interface, and those values are used as the defaults the next time the page is opened (parameters such as cycle and trigger days). Counting is done at daily granularity (for the same query conditions, at most one count per day). Once the cumulative trigger value is reached over time, an alarm is raised.

Filter criteria: "Start Date" (manual entry, date picker, default is the beginning of this month), "End Date" (manual entry, date picker, default is the current date), "Name" (editable drop-down), "Department" (editable drop-down), "Period" (input box), "Trigger Days" (input box).

For example, if the start date is T0, the end date is TD, the cycle is N days, and the trigger days value is M, the system should check whether each user accessed the same sensitive account more than M times from T0 to T0+N days, then from T0+1 to T0+1+N days, T0+2 to T0+2+N days, ... up to TD to TD+N days (multiple accesses by one user to the same sensitive account in a single day count as 1, and counts are not accumulated across different users). For every window in which the continuous-access count exceeds M, a detail row is displayed; clicking on the count shows the individual visit records.

2. Sample data

The data fields are: "date","number","name","department","status"

"2020-01-01 00:05:10","00010872","Baoda","Technical Department","Yes"

Sample data generation script:

#!/bin/bash
# Name lists (abbreviated; the original post contains several hundred entries, garbled in translation)
first_names=("Zhao" "Qian" "Sun" "Li" "Zhou" "Wu" "Zheng" "Wang")
last_names=("Wei" "Gang" "Yong" "Yi" "Jun" "Feng" "Qiang" "Ping")
departments=("Logistics Department" "Finance Department" "Technology Department" "Marketing Department" "Sales Department" "Human Resources Department")

# Timestamp of January 1st, 2020
start_date=$(date -d "2020-01-01" +%s)
# Current timestamp, initialized to the starting time
current_date=$start_date
# Used to record the previous number
prev_number=""
# Used to record the previous name
prev_name=""
# Used to record the previous department
prev_department=""
# Used to control whether to generate a new number
generate_new_number=true
# Used to record the start time of continuous access
start_visit_time=0

# Output CSV header
echo "\"date\",\"number\",\"name\",\"department\",\"status\""

while true; do
    # Random increment between 0 and 86400 seconds (random seconds within a day)
    random_seconds=$((RANDOM % 86400))
    current_date=$((current_date + random_seconds))
    # Stop once the current time is exceeded
    if [ $current_date -gt $(date +%s) ]; then
        break
    fi
    # The probability of generating a new combination is 1/5
    if [ $((RANDOM % 5)) -eq 0 ]; then
        generate_new_number=true
    fi
    if [ $generate_new_number = true ]; then
        # Generate an 8-digit random number
        number=$(printf "%08d" $((RANDOM % 100000000)))
        prev_number=$number
        # Randomly select a name
        first_name=$(echo ${first_names[RANDOM % ${#first_names[@]}]})
        last_name=$(echo ${last_names[RANDOM % ${#last_names[@]}]})
        prev_name="$first_name$last_name"
        # Randomly select a department
        prev_department=$(echo ${departments[RANDOM % ${#departments[@]}]})
        generate_new_number=false
        start_visit_time=$current_date
    else
        number=$prev_number
        if [ $((current_date - start_visit_time)) -gt $((7 * 86400)) ]; then
            generate_new_number=true
            continue
        fi
    fi
    # Convert the timestamp to date-time format
    full_date=$(date -d @$current_date +%Y-%m-%d\ %H:%M:%S)
    # Status is fixed to "Yes"
    yes_no="Yes"
    # Output a CSV data row
    echo "\"$full_date\",\"$number\",\"$prev_name\",\"$prev_department\",\"$yes_no\""
done

3. Data query SPL

| makeresults
| addinfo
| table info_min_time info_max_time earliest_time latest_time
| join [ search index=edw sourcetype=csv status="Yes" | stats earliest(_time) as earliest_time latest(_time) as latest_time ]
| eval searchEarliestTime=if(info_min_time == "0.000",earliest_time,info_min_time)
| eval searchLatestTime=if(info_max_time="+Infinity", relative_time(latest_time,"+1d"), info_max_time)
| eval start=mvrange(searchEarliestTime, searchLatestTime, "1d")
| mvexpand start
| eval start=strftime(start, "%F 00:00:00")
| eval start=strptime(start, "%F %T")
| eval start=round(start)
| eval end=relative_time(start,"+7d")
| where end < searchLatestTime
| eval a=strftime(start, "%F")
| eval b=strftime(end, "%F")
| fields start a end b
| map search="search earliest=$start$ latest=$end$ index=edw sourcetype=csv status="Yes" | bin _time span=1d | stats dc(_time) as "cishu" by _time day name department number | eval a=$a$ | eval b=$b$ | stats sum(cishu) as count,values(day) as "Query date" by a b name number department " maxsearches=500000
| where count > 2

Dashboard XML

<form version="1.1">
  <label>Homepage Warning Information Display (Verification)</label>
  <description></description>
  <search>
    <query>| makeresults | addinfo
| table info_min_time info_max_time earliest_time latest_time
| join [ search index=edw sourcetype=csv status="Yes" | stats earliest(_time) as earliest_time latest(_time) as latest_time ]</query>
    <earliest>$query_date.earliest$</earliest>
    <latest>$query_date.latest$</latest>
    <progress>
      <unset token="generate_time1"></unset>
    </progress>
    <finalized>
      <eval token="searchEarliestTime1">if($result.info_min_time$ == "0.000",$result.earliest_time$,$result.info_min_time$)</eval>
      <eval token="searchLatestTime1">if($result.info_max_time$="+Infinity", relative_time($result.latest_time$,"+1d"), $result.info_max_time$)</eval>
      <set token="generate_time1">true</set>
    </finalized>
  </search>
  <label>Homepage Warning Information Display (Verification)</label>
  <search>
    <query>| makeresults | addinfo
| table info_min_time info_max_time earliest_time latest_time
| join [ search index=edw sourcetype=csv status="Yes" | stats earliest(_time) as earliest_time latest(_time) as latest_time ]</query>
    <earliest>$query_date2.earliest$</earliest>
    <latest>$query_date2.latest$</latest>
    <progress>
      <unset token="generate_time2"></unset>
    </progress>
    <finalized>
      <eval token="searchEarliestTime2">if($result.info_min_time$ == "0.000",$result.earliest_time$,$result.info_min_time$)</eval>
      <eval token="searchLatestTime2">if($result.info_max_time$="+Infinity", relative_time($result.latest_time$,"+1d"), $result.info_max_time$)</eval>
      <set token="generate_time2">true</set>
    </finalized>
  </search>
  <search>
    <query>| makeresults
| eval start=mvrange($searchEarliestTime1$, $searchLatestTime1$, "1d")
| mvexpand start
| eval end=relative_time(start,"+$time_interval$d")
| where end &lt;=$searchLatestTime1$
| stats count</query>
    <progress>
      <unset token="error_input1"></unset>
      <unset token="display_info1"></unset>
    </progress>
    <finalized>
      <condition match="$result.count$ &gt; 0">
        <eval token="display_info1">$result.count$</eval>
      </condition>
      <condition>
        <eval token="error_input1">$result.count$</eval>
      </condition>
    </finalized>
  </search>
  <search>
    <query>| makeresults
| eval start=mvrange($searchEarliestTime2$, $searchLatestTime2$, "1d")
| mvexpand start
| eval end=relative_time(start,"+$time_interval_2$d")
| where end &lt;=$searchLatestTime2$
| stats count</query>
    <progress>
      <unset token="error_input2"></unset>
      <unset token="display_info2"></unset>
    </progress>
    <finalized>
      <condition match="$result.count$ &gt; 0">
        <eval token="display_info2">$result.count$</eval>
      </condition>
      <condition>
        <eval token="error_input2">$result.count$</eval>
      </condition>
    </finalized>
  </search>
  <fieldset submitButton="false"></fieldset>
  <row>
    <panel>
      <title>number of events</title>
      <single>
        <search>
          <query>index=edw sourcetype=csv status="Yes" | bin _time span=1d | stats count by _time | stats sum(count) as 次数</query>
          <earliest>$query_date.earliest$</earliest>
          <latest>$query_date.latest$</latest>
        </search>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="height">128</option>
        <option name="rangeColors">["0x53a051","0x53a051"]</option>
        <option name="rangeValues">[0]</option>
        <option name="refresh.display">progressbar</option>
        <option name="useColors">1</option>
      </single>
    </panel>
  </row>
  <row>
    <panel depends="$generate_time1$">
      <title>Sensitive account access alert</title>
      <input type="time" searchWhenChanged="true" token="query_date">
        <label>Query date</label>
        <default>
          <earliest>@mon</earliest>
          <latest>now</latest>
        </default>
      </input>
      <input type="text" searchWhenChanged="true" token="time_interval">
        <label>cycle</label>
        <default>7</default>
      </input>
      <input type="text" searchWhenChanged="true" token="trigger">
        <label>Trigger Value</label>
        <default>2</default>
      </input>
    </panel>
  </row>
  <row>
    <panel depends="$error_input1$">
      <html>
        <code>The query interval cannot be generated, please check the input</code>
      </html>
    </panel>
  </row>
  <row>
    <panel depends="$display_info1$">
      <single>
        <search>
          <query>| makeresults
| eval start=mvrange($searchEarliestTime1$, $searchLatestTime1$, "1d")
| mvexpand start
| eval end=relative_time(start,"+$time_interval$d")
| where end &lt;=$searchLatestTime1$
| eval a=strftime(start, "%F")
| eval b=strftime(end, "%F")
| fields start a end b
| map search="search earliest=\"$$start$$\" latest=\"$$end$$\" index=edw sourcetype=csv status="Yes" | stats count by name number department " maxsearches=500000
| where count &gt; $trigger$
| stats count</query>
          <earliest>$query_date.earliest$</earliest>
          <latest>$query_date.latest$</latest>
        </search>
        <option name="afterLabel">Example of Sensitive Account Access Alert Event</option>
        <option name="beforeLabel">Found that</option>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">all</option>
        <option name="linkView">search</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x65a637","0xd93f3c"]</option>
        <option name="rangeValues">[0]</option>
        <option name="refresh.display">progressbar</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
        <option name="useThousandSeparators">1</option>
        <drilldown>
          <link target="_blank">search?q=%7C%20makeresults%20%0A%7C%20eval%20start%3Dmvrange($searchEarliestTime1$%2C%20$searchLatestTime1$%2C%20%221d%22)%20%0A%7C%20mvexpand%20start%20%0A%7C%20eval%20end%3Drelative_time(start%2C%22%2B$time_interval$d%22)%20%0A%7C%20where%20end%20%3C%3D$searchLatestTime1$%20%0A%7C%20eval%20a%3Dstrftime(start%2C%20%22%25F%22)%20%0A%7C%20eval%20b%3Dstrftime(end%2C%20%22%25F%22)%20%0A%7C%20fields%20start%20a%20end%20b%20%0A%7C%20map%20search%3D%22search%20earliest%3D%5C%22$$start$$%5C%22%20latest%3D%5C%22$$end$$%5C%22%20index%3Dedw%20sourcetype%3Dcsv%20status%3D%22%E6%98%AF%22%20%20%7C%20stats%20count%20%20by%20name%20number%20department%20%7C%20where%20count%20%3E$trigger$%20%20%20%22%20maxsearches%3D500000&amp;earliest=$query_date.earliest$&amp;latest=$query_date.latest$</link>
        </drilldown>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults
| eval start=mvrange($searchEarliestTime1$, $searchLatestTime1$, "1d")
| mvexpand start
| eval end=relative_time(start,"+$time_interval$d")
| where end &lt;=$searchLatestTime1$
| eval a=strftime(start, "%F")
| eval b=strftime(end, "%F")
| fields start a end b
| map search="search earliest=\"$$start$$\" latest=\"$$end$$\" index=edw sourcetype=csv status="Yes" | stats count by name number department " maxsearches=500000
| where count &gt;$trigger$</query>
          <earliest>$query_date.earliest$</earliest>
          <latest>$query_date.latest$</latest>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">true</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
  <row>
    <panel depends="$generate_time2$">
      <title>Continuous tracking of sensitive information behavior alerts</title>
      <input type="time" searchWhenChanged="true" token="query_date2">
        <label>Query date</label>
        <default>
          <earliest>@mon</earliest>
          <latest>now</latest>
        </default>
      </input>
      <input type="text" searchWhenChanged="true" token="time_interval_2">
        <label>cycle</label>
        <default>7</default>
      </input>
      <input type="text" searchWhenChanged="true" token="trigger2">
        <label>Trigger Value</label>
        <default>2</default>
      </input>
    </panel>
  </row>
  <row>
    <panel depends="$error_input2$">
      <html>
        <code>The query interval cannot be generated, please check the input</code>
      </html>
    </panel>
  </row>
  <row>
    <panel depends="$display_info2$">
      <single>
        <search>
          <query>| makeresults
| eval start=mvrange($searchEarliestTime2$, $searchLatestTime2$, "1d")
| mvexpand start
| eval end=relative_time(start,"+$time_interval_2$d")
| eval alert_date=relative_time(end,"+1d")
| where end &lt;=$searchLatestTime2$
| eval a=strftime(start, "%F")
| eval b=strftime(end, "%F")
| eval c=strftime(alert_date,"%F")
| fields start a end b c
| map search="search earliest=\"$$start$$\" latest=\"$$end$$\" index=edw sourcetype=csv status="Yes" | bin _time span=1d | stats dc(_time) as "cishu" by _time day name department number | eval a=$$a$$ | eval b=$$b$$ | eval c=$$c$$ | stats sum(cishu) as count,values(day) as "Query date" by a b c name number department " maxsearches=500000
| where count &gt; $trigger2$
| stats count</query>
          <earliest>$query_date2.earliest$</earliest>
          <latest>$query_date2.latest$</latest>
        </search>
        <option name="afterLabel">Example "User Continuous Tracking Behavior" Event</option>
        <option name="beforeLabel">Found that</option>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">all</option>
        <option name="linkView">search</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x65a637","0xd93f3c"]</option>
        <option name="rangeValues">[0]</option>
        <option name="refresh.display">progressbar</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
        <option name="useThousandSeparators">1</option>
        <drilldown>
          <link target="_blank">search?q=%7C%20makeresults%20%0A%7C%20eval%20start%3Dmvrange($searchEarliestTime2$%2C%20$searchLatestTime2$%2C%20%221d%22)%20%0A%7C%20mvexpand%20start%20%0A%7C%20eval%20end%3Drelative_time(start%2C%22%2B$time_interval_2$d%22)%20%0A%7C%20eval%20alert_date%3Drelative_time(end%2C%22%2B1d%22)%0A%7C%20where%20end%20%3C%3D$searchLatestTime2$%20%0A%7C%20eval%20a%3Dstrftime(start%2C%20%22%25F%22)%20%0A%7C%20eval%20b%3Dstrftime(end%2C%20%22%25F%22)%20%0A%7C%20eval%20c%3Dstrftime(alert_date%2C%22%25F%22)%20%0A%7C%20fields%20start%20a%20end%20b%20c%0A%7C%20map%20search%3D%22search%20earliest%3D%5C%22$$start$$%5C%22%20latest%3D%5C%22$$end$$%5C%22%20%20%0Aindex%3Dedw%20sourcetype%3Dcsv%20status%3D%22%E6%98%AF%22%20%20%20%7C%20bin%20_time%20span%3D1d%20%20%7C%20stats%20dc(_time)%20as%20%22%E8%AE%BF%E9%97%AE%E6%95%8F%E6%84%9F%E8%B4%A6%E6%88%B7%E6%AC%A1%E6%95%B0%22%20by%20%20_time%20day%20name%20department%20number%0A%20%20%20%20%7C%20eval%20a%3D$$a$$%20%20%7C%20eval%20b%3D$$b$$%20%7C%20eval%20c%3D$$c$$%0A%7C%20stats%20sum(%E8%AE%BF%E9%97%AE%E6%95%8F%E6%84%9F%E8%B4%A6%E6%88%B7%E6%AC%A1%E6%95%B0)%20as%20count%2Cvalues(day)%20as%20%22%E6%9F%A5%E8%AF%A2%E6%97%A5%E6%9C%9F%22%20by%20a%20b%20c%20name%20number%20department%0A%22%20maxsearches%3D500000%20%0A%7C%20where%20count%20%3E%20$trigger2$&amp;earliest=$query_date.earliest$&amp;latest=$query_date.latest$</link>
        </drilldown>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults
| eval start=mvrange($searchEarliestTime2$, $searchLatestTime2$, "1d")
| mvexpand start
| eval end=relative_time(start,"+$time_interval_2$d")
| eval alert_date=relative_time(end,"+1d")
| where end &lt;=$searchLatestTime2$
| eval a=strftime(start, "%F")
| eval b=strftime(end, "%F")
| eval c=strftime(alert_date,"%F")
| fields start a end b c
| map search="search earliest=\"$$start$$\" latest=\"$$end$$\" index=edw sourcetype=csv status="Yes" | bin _time span=1d | stats dc(_time) as "cishu" by _time day name department number | eval a=$$a$$ | eval b=$$b$$ | eval c=$$c$$ | stats sum(cishu) as count,values(day) as "Query date" by a b c name number department " maxsearches=500000
| where count &gt; $trigger2$</query>
          <earliest>$query_date2.earliest$</earliest>
          <latest>$query_date2.latest$</latest>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

4. Problem reproduction

There is data loss in cross-year queries, for example when querying 2023-12-01 to 2024-01-01, as shown in the following figure.

In addition, the dashboard and the equivalent ad-hoc query statement return different numbers of results:
Scenario 1: Query 12/01/2023-06/30/2024. Result: 12 cases on the dashboard, 2 more than the ad-hoc query.
Scenario 2: Query 12/01/2023-06/30/2024. Result: 187 cases on the dashboard, 1 more than the ad-hoc query.
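As a cross-check on the map-based approach (a rough sketch, not a drop-in replacement for the dashboard): the per-user, per-account count of distinct days within a rolling N-day window can also be computed in a single search with streamstats time_window, which avoids map and its subsearch/maxsearches behavior entirely. The index, sourcetype, and field names below are taken from the post, and the 7-day window and threshold of 2 mirror the dashboard defaults; note that streamstats counts a window ending at each day rather than starting at it, so detections are equivalent but reported against a different anchor date.

index=edw sourcetype=csv status="Yes"
| bin _time span=1d
| stats count as daily_hits by _time name number department
| sort 0 + _time
| streamstats time_window=7d count as days_in_window by name number department
| where days_in_window > 2

Each row after the first stats is one user/account/day, so days_in_window is the number of distinct days the same person touched the same account within the trailing 7 days. Comparing this output with the map results for the same time range may help show whether the discrepancy comes from map itself or from how the dashboard builds the start/end tokens.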
Hi all, I am deploying the Hipster Store application as part of the Guided Onboarding in Splunk Observability Cloud. However, many of the pods are stuck in `CrashLoopBackOff` status.

Environment Details
- Platform: Minikube
- Minikube Version: v1.34.0 (running on macOS 14.5, arm64)
- Docker Version: Client v27.4.0, Server v27.2.0 (Community Edition)
- Helm Version: version.BuildInfo{Version:"v3.16.4", GitCommit:"7877b45b63f95635153b29a42c0c2f4273ec45ca", GitTreeState:"clean", GoVersion:"go1.22.7"}

Command Used for Deployment

helm install my-sixth-splunk-o11y-otel-collector --set="splunkObservability.realm=us1,splunkObservability.accessToken=XXX_XXXXXXXXXXXXXXXXXX,clusterName=my-sixth-otel-collector-cluster" splunk-otel-collector-chart/splunk-otel-collector

Error Observed

Pods remain in the `CrashLoopBackOff` state. Below is the output of `kubectl get pods`:

NAME                               READY   STATUS             RESTARTS        AGE
adservice-cbbc87864-fbt7d          0/1     Running            3 (45s ago)     7m51s
cartservice-797fcdd44b-67ctq       0/1     CrashLoopBackOff   6 (97s ago)     7m52s
checkoutservice-7c5955d5b9-gksms   0/1     CrashLoopBackOff   6 (2m11s ago)   7m52s
... (other entries truncated for brevity)

Logs from Affected Pods

1. `checkoutservice` logs:
{"message":"failed to start profiler: project ID must be specified in the configuration if running outside of GCP","severity":"warning","timestamp":"2025-01-06T22:58:09.648950291Z"}
{"message":"sleeping 10s to retry initializing Stackdriver profiler","severity":"info"}

2. `productcatalogservice` logs:
{"message":"failed to start profiler: project ID must be specified in the configuration if running outside of GCP","severity":"warning","timestamp":"2025-01-06T22:59:15.143462044Z"}
{"message":"sleeping 20s to retry initializing Stackdriver profiler","severity":"info"}

3. `shippingservice` logs:
{"message":"failed to start profiler: project ID must be specified in the configuration if running outside of GCP","severity":"warning","timestamp":"2025-01-06T22:58:59.462097926Z"}
{"message":"sleeping 10s to retry initializing Stackdriver profiler","severity":"info"}

Troubleshooting Attempts
1. Tried adding dummy Google Project IDs (I researched how they should be formatted) in various configuration files.
2. Disabled Tracing, Profiling, and Debugging at different times.
3. Confirmed the Helm chart installation was successful.

Questions
1. How can I resolve the `CrashLoopBackOff` issue?
2. Is there a way to bypass or properly configure the Stackdriver profiler to avoid requiring a Google Project ID?
3. Are there any additional configurations required to run this demo outside of GCP?

Thank you.
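In case it helps while digging into this: the Hipster Store services come from the upstream GoogleCloudPlatform microservices-demo project, and those images normally honor a DISABLE_PROFILER environment variable, so the Stackdriver profiler can usually be switched off per deployment instead of supplying a dummy project ID. A sketch under that assumption (deployment names taken from the kubectl output above):

kubectl set env deployment/checkoutservice DISABLE_PROFILER=1
kubectl set env deployment/productcatalogservice DISABLE_PROFILER=1
kubectl set env deployment/shippingservice DISABLE_PROFILER=1

That said, the profiler messages are logged as warnings and retried, so they may not be what is actually killing the pods; kubectl describe pod <name> and the container exit codes are worth checking, since CrashLoopBackOff on an arm64 Minikube can also come from images that were only published for amd64.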
I am getting a result like this.

Query:

index="webmethods_prd" host="USPGH-WMA2AISP*" source="/apps/WebMethods/IntegrationServer/instances/default/logs/SmartIST.log"
| stats count by SmartISTINTERFACE

Instead of the above, I want a report like this:
Hello all, I'm trying to reset a host under the DHM tab in TrackMe so I can remove a sourcetype that TrackMe is no longer finding because it no longer exists. However, when I do this I receive the error below. I have already checked my permissions, and as an admin I have the correct role to administer the app as well.
All, In the metric browser, I see a DB appear under Backends|Discovered backend call...  I also see the same backend under Overall Application Performance|My Tier|External Calls|Call-JDBC to Discovered backend... The calls per minute graphs are approximately the same shape, but counts are not even close.  Under backends, the counts are much higher, like 5x higher.  This app only has 1 active tier. Why is there such a large difference in counts?  I would like to get a breakdown of DB calls per tier, but the numbers seem low. thanks
I am looking to have the middle row of this table be on the left instead. I think something in the query is off, causing this odd behavior.

index=main host=* sourcetype=syslog process=elcsend "\"config" $city$
| rex "([^!]*!){16}(?P<MEMGB>[^!]*)!"
| chart count by MEMGB
| addcoltotals label=Total labelfield=MEMGB
| sort count desc

This is the current search query. The rex provides the data in the MEMGB column.
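If the row that ends up in the middle is the Total row from addcoltotals, that is a side effect of sorting after the totals row has been appended: sort treats Total like any other row and places it wherever its count falls. A sketch that sorts first and adds the totals last, so Total stays at the bottom (the rest of the search is unchanged from the post):

index=main host=* sourcetype=syslog process=elcsend "\"config" $city$
| rex "([^!]*!){16}(?P<MEMGB>[^!]*)!"
| chart count by MEMGB
| sort - count
| addcoltotals label=Total labelfield=MEMGB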
How do I return field values from a specific max(eventnumber)? This was helpful but did not solve my issue: Solved: How to get stats max count of a field by another f... - Splunk Community

We are ingesting logs from test devices. Each log has an event number, which I can search on to find the most recent event. When the devices disconnect from our cloud instance, they cache events, which are transmitted at a lower priority (newest to oldest) than real-time events. For example: event #100 connected to cloud, events 101-103 disconnected from cloud and cached, event #104 re-connected to cloud (latest status) received, then event 103 is transmitted, then 102, so using latest/earliest or first/last does not return the most recent status.

The logs consist of an event number and boolean (true/false) fields. Searching for max(event number) and values(boolean field value) results in both true and false for any time picker period that has multiple events, for example:

| stats max(triggeredEventNumber) values(isCheckIn) values(isAntiSurveillanceViolation) BY userName

userName       max(triggeredEventNumber)   values(isCheckIn)   latest(isAntiSurveillanceViolation)
NS2_GS22_MW    92841                       false true          FALSE

In the example the actual value of isCheckIn was true. Here is a complete example event:

{
  "version": 1,
  "logType": "deviceStateEvent",
  "deviceSerialNumber": "4234220083",
  "userName": "NS2_GS22_MW",
  "cloudTimestampUTC": "2025-01-06T18:17:00Z",
  "deviceTimestampUTC": "2025-01-06T18:16:46Z",
  "triggeredEventNumber": 92841,
  "batteryPercent": 87,
  "isCheckIn": true,
  "isAntiSurveillanceViolation": false,
  "isLowBatteryViolation": false,
  "isCellularViolation": false,
  "isDseDelayed": false,
  "isPhonePresent": true,
  "isCameraExposed": false,
  "isShutterOpen": false,
  "isMicExposed": false,
  "isCharging": false,
  "isPowerOff": false,
  "isHibernation": false,
  "isPhoneInfoStale": false,
  "bleMacAddress": "5c:2e:c6:bc:e4:cf",
  "cellIpv4Address": "0.0.0.0",
  "cellIpv6Address": "::"
}
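One pattern that avoids the multivalue merging is to find the per-user maximum with eventstats and then keep only the event that carries it, so every reported field comes from that single newest record. A sketch using the field names from the sample event:

| eventstats max(triggeredEventNumber) as maxEventNumber by userName
| where triggeredEventNumber=maxEventNumber
| table userName triggeredEventNumber isCheckIn isAntiSurveillanceViolation batteryPercent

Because the surviving row is the actual event with the highest triggeredEventNumber, isCheckIn and the other booleans keep the values they had in that event (true in your example) instead of being merged across the cached events.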
Hello all! I need to get data from Splunk Observability (the list of Synthetics tests) into Splunk Cloud. I have tried this Observability API:

curl -X GET "https://api.{REALM}.signalfx.com/v2/synthetics/tests" \
  -H "Content-Type: application/json" \
  -H "X-SF-TOKEN: <value>"

Then, I attempted to execute a cURL query in Splunk Cloud like this:

| curl method=get uri=https://api.xxx.signalfx.com/v2/synthetics/tests?Content-Type=application/json&X-SF-TOKEN=xxxxxxxxxxx
| table curl*

but I am getting the following error: HTTP ERROR 401 Unauthorized.

Thanks for any help!
Hi, could you please let me know in what scenarios we would use eventstats vs stats?
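A small illustration of the difference (index and field names are placeholders): stats replaces the incoming events with one summary row per group, while eventstats computes the same aggregate but writes it back onto every original event, so you can keep working with the raw events alongside the aggregate.

index=web sourcetype=access_combined
| stats avg(bytes) as avg_bytes by host

returns one row per host and nothing else, whereas

index=web sourcetype=access_combined
| eventstats avg(bytes) as avg_bytes by host
| where bytes > 2 * avg_bytes

keeps every event and adds avg_bytes to each of them, which makes per-event comparisons against the group aggregate possible. In short, stats is for final summaries and reports; eventstats is for enriching events you still want to filter or transform afterwards.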
We are planning to upgrade our Splunk hardware. We currently have the setup below (a multisite indexer cluster with independent search head clusters) and we are facing problems with a low CPU count and high disk latency (we currently have HDDs). We primarily index data through HEC.

Type                             Site   Nodes   CPU p/v (per node)   Memory GB (per node)
SH cluster                       1      4       16/32                128
Indexer cluster                  1      11      4/8                  64
Indexer manager/License master   1      1       16/32                128
SH cluster                       2      4       16/32                128
Indexer cluster                  2      11      4/8                  64
Indexer manager/License master   2      1       16/32                128

Daily indexing/license usage is 400-450 GB, which may grow further in the near future. Search concurrency example for one instance from the 4-node SH cluster:

We are trying to come up with the best hardware configuration that can support such a load.

Looking at the Splunk recommended settings, we have come up with the configuration below. Can someone shed more light on whether this is an optimal config, and also advise on the number of SH machines and indexer machines needed with such new hardware?

Site 1: 3-node SH cluster, 7-node indexer cluster
Site 2: As we use site 2 for searching and indexing only when site 1 is unavailable, maybe it can be smaller?

Role          CPU (p/v)   Memory
Indexer       24/48       64G
Non-indexer   32/64       64G
I am working on creating a dashboard to display data from my app. I have a dropdown where you select which environment you want to see data for, and I need to set 2 values based on this dropdown: 1. the connection for DB queries, 2. the host for log-based queries. I have tried many options but couldn't get any to work. I am trying to do:

<fieldset submitButton="false">
  <input type="dropdown" token="connection">
    <label>Select Region</label>
    <default>dev-platform-postgres</default>
    <choice value="dev-platform-postgres">US</choice>
    <choice value="dev-platform-postgres-eu">EU</choice>
    <change>
      <condition label = 'dev-platform-postgres'>
        <set token="host">eks-prod-saas-ue1-*</set>
      </condition>
      <condition label = 'dev-platform-postgres-eu'>
        <set token="host">prd-shared-services-eu-eks*</set>
      </condition>
    </change>
  </input>
</fieldset>

and then be able to use both $host$ and $connection$ tokens in the dashboard, but I can't get $host$ initialized correctly. Any help would be appreciated. Also, as a side note, I am getting the warning "Expected at most 1 children of fieldset in dashboard, instead saw 2"; how am I supposed to handle a case where I want 2 selections, one for date and one for connection?
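One likely catch, sketched below: inside a <change> block, a <condition> matches against the label (or value) of the selected <choice>, and the labels here are US and EU rather than the connection strings, so the conditions as written never fire and $host$ never gets set. A minimal corrected sketch of the input:

<fieldset submitButton="false">
  <input type="dropdown" token="connection">
    <label>Select Region</label>
    <default>dev-platform-postgres</default>
    <choice value="dev-platform-postgres">US</choice>
    <choice value="dev-platform-postgres-eu">EU</choice>
    <change>
      <condition label="US">
        <set token="host">eks-prod-saas-ue1-*</set>
      </condition>
      <condition label="EU">
        <set token="host">prd-shared-services-eu-eks*</set>
      </condition>
    </change>
  </input>
</fieldset>

As for the "Expected at most 1 children of fieldset" warning, that often points at the root element: form inputs are meant to live under a <form> root, so if the dashboard's root is <dashboard> it may be worth switching it to <form version="1.1">, after which a single fieldset can hold any number of inputs.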
Can I do wildcard matching in a lookup?

| makeresults
| eval ip="192.168.101.10"
| lookup ip.csv ip output host

In my lookup I have two entries, ip=192.168.101.10 and ip=192.168.101.10/24. How can I add a wildcard (*) to the match so that I get both entries back?
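The lookup command does exact matching by default, but a lookup definition can be told to treat a field as a wildcard or CIDR match via match_type in transforms.conf (or the equivalent advanced options of a lookup definition in the UI). A sketch, assuming a lookup definition named ip_lookup on top of ip.csv; with max_matches above 1 the output field comes back multivalued, so both the exact entry and the /24 entry can be returned:

# transforms.conf
[ip_lookup]
filename = ip.csv
match_type = CIDR(ip)
max_matches = 10

| makeresults
| eval ip="192.168.101.10"
| lookup ip_lookup ip OUTPUT host

CIDR(ip) treats each lookup value as a network (a plain address counts as a /32), which fits entries like 192.168.101.10/24 better than a literal * wildcard; WILDCARD(ip) is the option to use if the lookup values contain * patterns instead.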
Hello, I'm setting up StatsD to send custom metrics from an AWS EC2 instance, where the Splunk OpenTelemetry Collector is running, to Splunk Observability Cloud. I've configured StatsD as a receiver using the guidelines from https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver. Here's my StatsD receiver configuration in the agent_config.yaml file:

receivers:
  statsd:
    endpoint: "localhost:8125"
    aggregation_interval: 60s
    enable_metric_type: false
    is_monotonic_counter: false
    timer_histogram_mapping:
      - statsd_type: "histogram"
        observer_type: "histogram"
        histogram:
          max_size: 50
      - statsd_type: "distribution"
        observer_type: "histogram"
        histogram:
          max_size: 50
      - statsd_type: "timing"
        observer_type: "summary"

The GitHub documentation provides exporter configurations, but I'm unsure how to implement them effectively. The GitHub document shows the following:

exporters:
  file:
    path: ./test.json
service:
  pipelines:
    metrics:
      receivers: [statsd]
      exporters: [file]

Below is the receivers configuration I am setting in the service section of agent_config.yaml:

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, otlp, signalfx, statsd]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]

When I add "statsd" to the receivers ("receivers: [hostmetrics, otlp, signalfx, statsd]" with "exporters: [signalfx]") as above and restart with "systemctl restart splunk-otel-collector.service", the Splunk OTel Collector agent stops sending any metrics to Splunk Observability Cloud, and when I remove statsd (receivers: [hostmetrics, otlp, signalfx]) the agent starts sending metrics again. What is the correct/supported receiver/exporter configuration in the service section for statsd?

Thanks
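One way to narrow this down (a sketch, not an official layout): give statsd its own pipeline so a problem with that one receiver cannot interfere with the existing metrics pipeline, and keep signalfx as the exporter, since the file exporter in the GitHub example is only there for local testing. Pipeline names like metrics/statsd are just identifiers; the processors are the ones already defined in the default agent_config.yaml.

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, otlp, signalfx]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
    metrics/statsd:
      receivers: [statsd]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]

If all metrics stop the moment statsd is added, the collector is most likely failing to start because it rejects something in the receiver block (the indentation of timer_histogram_mapping is a common culprit), so journalctl -u splunk-otel-collector right after the restart should show the exact configuration error.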
Hi all, I have a scenario where I have to find the difference between two field values (strings). For example:

fielda="raj", "rahul", "rohan"
fieldb="rahul", "rohan"

I need a third field holding the difference of the above two fields:

fieldc="raj"

I am running out of ideas on how to do it. Can someone please help with this?
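Assuming fielda and fieldb are (or can be split into) multivalue fields, one approach is mvmap plus mvfind: keep each value of fielda that cannot be found in fieldb. A self-contained sketch with the sample values hard-coded:

| makeresults
| eval fielda=split("raj,rahul,rohan", ","), fieldb=split("rahul,rohan", ",")
| eval fieldc=mvmap(fielda, if(isnull(mvfind(fieldb, "^".fielda."$")), fielda, null()))

mvmap evaluates the expression once per value of fielda (the field name refers to the current value inside the expression), and mvfind returns null when nothing in fieldb matches, so only the values missing from fieldb survive into fieldc, which here leaves "raj". Note that mvfind takes a regular expression, so values containing regex metacharacters would need escaping.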
My requirement is to pass tokens via drilldown from a parent dashboard to a drilldown dashboard (which is built with JavaScript). From the parent dashboard, I tried to pass the tokens via the drilldown URL to the JavaScript dashboard, but that did not work. Can anyone please help me with passing tokens via drilldown to a target dashboard that is built with JavaScript?
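A Simple XML drilldown can only hand values to another page through the URL, so one common pattern (a sketch; the app, dashboard, and parameter names here are made up for illustration) is to append them as query-string parameters:

<drilldown>
  <link target="_blank">/app/my_app/my_js_dashboard?region=$row.region$&amp;host=$click.value$</link>
</drilldown>

A JavaScript-built dashboard will not pick these up automatically the way form.* tokens work in Simple XML, so the target page's script has to read them itself, for example with new URLSearchParams(window.location.search), and then push the values into whatever token model or search manager it drives. If that last step is missing, the URL can carry the tokens perfectly and the target will still ignore them, which may be what you are seeing.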
Hi everyone, I want to ask why I cannot download data from a dashboard. If I set the range to 4 hours it works, and downloading another date, e.g. 1 Jan 2025, is fine; it only happens when I want to download data for 4 Jan 2025. Since I cannot access any website or media to take a screenshot from the client, I can only take a photo. Thank you.
Back again with another question. I'm still playing with my search, and while this is an issue I've managed to work around, the fact that I need to work around it without knowing the why behind it eats at me.

I have a search that pulls data from two different sourcetypes, and each of those sourcetypes has a src_mac field (the data in these fields is identical except for the letter case). To rectify the issues this causes when attempting to call the field in a search, I use eval to create two new fields named after the sourcetype of each event so that the field names are now unique (in addition to fixing the letter case mismatch). Specifically, this creates two fields named "src_mac-known_devices" and "src_mac-ise:syslog":

| eval src_mac-{sourcetype}=src_mac, src_mac=upper(src_mac)
| where upper("src_mac-*") = upper("src_mac-*")

However, in the where command, I'm only able to call these two new fields when I use a wildcard. I can't actually put in:

| where upper("src_mac-bro_known_devices") = upper("src_mac-ise:syslog")

The command just doesn't work for some reason, and I get zero hits despite *knowing* I should get plenty of hits. In other words, it works fine when I use the wildcard and not at all when I use anything else. Even attempting to do something like

| where upper("src_mac-b*") = upper("src_mac-c*")

doesn't work.

I have read through the wiki articles on proper naming practices for fields, so I know my two fields contain illegal characters. I also know the : is used when searching indexed fields, but I thought I could use single or double quotation marks to work around that limitation, or maybe use / to escape the special characters... but none of that has worked. At this point, I just want to understand *why* it isn't working. Thank you for any help anyone can provide.
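Two things seem to be at play here, and both are about how eval and where parse quotes rather than about illegal characters as such. In eval/where, double quotes always denote a string literal, so upper("src_mac-*") = upper("src_mac-*") just compares the text "SRC_MAC-*" with itself and is true for every event (which is why the wildcard version appears to work), while the explicit version compares two different literals and is never true. Field names containing characters like - or : have to be referenced with single quotes on the reading side, e.g. 'src_mac-ise:syslog'. The second catch is that each event only receives the one src_mac-<sourcetype> field matching its own sourcetype, so a per-event where comparing both fields can never match on a single event even with correct quoting. If the goal is to find MAC addresses present in both sourcetypes, a stats correlation sidesteps both issues; this sketch assumes the sourcetype names mentioned in the post:

(sourcetype=bro_known_devices OR sourcetype="ise:syslog")
| eval src_mac=upper(src_mac)
| stats dc(sourcetype) as sourcetype_count values(sourcetype) as sourcetypes by src_mac
| where sourcetype_count > 1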
Hello everyone, I am facing an issue with the alerts triggered by the "Set Default PowerShell Execution Policy To Unrestricted or Bypass" (Correlation Search) rule in Splunk, as many alerts are being generated unexpectedly. After reviewing the details, I added the command `| stats count BY process_name` to analyze the data more precisely. After executing this, the result was 389 processes within 24 hours. However, it seems there might be false positives and I’m unable to determine if this alert is normal or if there’s a misconfiguration. I would appreciate any help in identifying whether these alerts are expected or if there is an issue with the configuration or the rule itself. Any assistance or advice would be greatly appreciated. Thank you in advance.  
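For triage it may help to widen the split beyond process_name so the 389 hits can be bucketed; this is a sketch appended to the correlation search results, assuming they expose the usual Endpoint.Registry fields (dest, user, registry_value_data) alongside process_name — adjust to whatever fields your results actually contain:

| stats count dc(dest) as hosts values(registry_value_data) as policy_value by process_name user
| sort - count

Legitimate automation (software deployment, configuration management, developer tooling) tends to show up as a few process/user combinations setting the same value across many hosts, which are candidates for tuning or allow-listing in the correlation search, while one-off interactive powershell.exe changes on a single host are the ones worth investigating individually.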
Where can we see the actual score of a Splunk exam? From the Splunk website we can only get the certification, and from Pearson VUE we can only see a report that says congratulations, you passed, without mentioning any actual score.
We have a 5-node Splunk forwarder cluster to handle the throughput of multiple servers in our datacenter. Currently our upgrade method keeps the deployment server mutable, where we just run config changes via Chef and update it in place, while the 5 forwarder nodes are treated as fully replaceable with Terraform and Chef.

Everything is working, but I notice the deployment server holds onto forwarders after Terraform destroys the old one, and the new one phones home on a new IP (currently on DHCP) but with the same hostname as the destroyed forwarder. Would replacing the forwarders with the same static IP and hostname resolve that, or would there still be duplicate entries?

Deployment server: Oracle Linux 8.10, Splunk Enterprise 8.2.9
Forwarders: Oracle Linux 8.10, splunkforwarder 8.2.9