All Topics


Hi, I'm attempting to calculate the average of the last six CPU event values. If the average of those six events is greater than 95%, an alert must be sent. I basically tried the query below, but it produced nothing. Can someone help?

index=* sourcetype=cpu CPU=all host=* earliest=-35m
| rename "%_Idle_Time" as Percent_Idle_Time
| eval CpuUsage=coalesce(100-Percent_Idle_Time, 100-PercentIdleTime)
| streamstats count by host
| where count<=6
| stats avg(values(CpuUsage)) as "Average of CpuUsage last 6 intervals(5mins range)" by host

Regards,
Satheesh
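One possible fix (a minimal sketch, assuming the field names above; not tested against this data): stats cannot nest one function inside another, so avg(values(CpuUsage)) is invalid and yields nothing. Since search results stream newest-first, the streamstats count with where count<=6 already keeps the six most recent events per host, and a plain avg plus a threshold filter can drive the alert:

index=* sourcetype=cpu CPU=all host=* earliest=-35m
| rename "%_Idle_Time" as Percent_Idle_Time
| eval CpuUsage=coalesce(100-Percent_Idle_Time, 100-PercentIdleTime)
| streamstats count by host
| where count<=6
| stats avg(CpuUsage) as avg_cpu by host
| where avg_cpu>95

With the alert set to trigger when the number of results is greater than 0, this fires only for hosts whose six-event average exceeds 95.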
I am getting "page not reachable" after finishing the Splunk Enterprise installation on an AWS virtual machine. I completed all the commands provided by this page. Can you help me, please?
Hi Guys, I am trying to learn Phantom app development using an on-prem Phantom installation, and have come across really weird behavior when adding data to action_results. If I have some data I want to add, say:

data = ["abc", "def", "ghi", "jkl"]

it makes sense that I might want to do something like:

for d in data:
    action_result.add_data(d)

and expect to get an action result with 4 entries... Instead, I get an action result with 4 duplicates of the above data, effectively 16 entries:

[["abc", "def", "ghi", "jkl"], ["abc", "def", "ghi", "jkl"], ["abc", "def", "ghi", "jkl"], ["abc", "def", "ghi", "jkl"]]

Maybe this is intended behavior? To me this is weird, but since this is my own app I can find ways to work around it. However, this behavior also exists in all the other apps, such as the Splunk app. If I use the Splunk app to run a search against my Splunk instance, say with the query index=test | head 6, I would expect to get 6 results. But since the Splunk app also iterates over the results it receives and uses the add_data method, the action results end up as 6 duplicate lists of 6 entries, effectively 36 results. I am unable to parse this in any playbook blocks. If I write JUST custom code blocks I can extract the desired results, but then what is the point of playbooks if I am writing everything in Python anyway? Also, what if I expect my search to return 1000 results? Having the action result grow quadratically means it would hold 1,000,000 items, which gets ridiculous. Is this expected behavior? If so, how do I get the results using the GUI playbook editor? Or is my Phantom instance borked somehow? (I ran the normal installer and haven't made any changes to my instance.)
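For contrast, the usual per-item pattern looks like the sketch below (the "value" key is my own invention; this assumes the standard Phantom ActionResult API, where each add_data call appends exactly one entry). Passing a dict per item keeps playbook datapaths such as action_result.data.*.value resolvable:

# Sketch only: runs inside an app's action handler, where action_result
# is an ActionResult previously registered via self.add_action_result(...)
data = ["abc", "def", "ghi", "jkl"]
for d in data:
    # one dict per call -> one entry per result in action_result.data
    action_result.add_data({"value": d})

If add_data still duplicates entries with this pattern, that would point at the instance rather than the app code.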
Hello, I have some issues with field extraction since value-pair and non-value-pair fields are within the same event. I am not sure how to implement a regex to extract these fields. A few sample events are given below. Value-pair fields (underlined) and non-value-pair fields (in bold, values separated by spaces) have been marked for one of the sample events. Any recommendation will be highly appreciated. Thank you.

[2023-04-25 07:43:23,923] INFO  signin           2055ddf870d6un9d1  6567bfb signIn SUCCESS user:bn4bfb monitorId:2056dhf40d6b9d1 IPaddr:15.218.61.1 userAgent:"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36 Edg/111.0.1661.41" userDescription:"64b9ib" sessionType:STANDARD browser:Chrome(111) os:windows
[2023-04-25 07:44:01,520] INFO  signin           009012cf0cce64c7  rmk9ddb signIn SUCCESS user:o0glddb monitorId:00amki2cf0cce6c7 IPaddr:15.198.2.35 userAgent:"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36 Edg/101.0.1661.41" userDescription:"ugdi8db" sessionType:STANDARD browser:Chrome(111) os:windows
[2023-04-25 07:45:13,632] INFO  signin           b9660cc3afe54c2  j56lb signIn SUCCESS user:j79lb monitorId:bop9060cc3afe54c2 IPaddr:10.209.23.194 userAgent:"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36 Edg/111.0.1661.41" userDescription:"jw908b" sessionType:STANDARD browser:Chrome(111) os:windows
[2023-04-25 07:46:09,358] INFO  signin           0904c268c6b7e9d58  jw095lb signOut SUCCESS user:090wjlb monitorId:59c9098c6b7e9d5io
[2023-04-25 07:46:47,077] INFO  signin           ee2bop9853a5623c  65co9b signIn SUCCESS user:6op0bb monitorId:ee2klo853a562op IPaddr:10.54.190.56 userAgent:"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36 Edg/111.0.1661.41" userDescription:"6op0bb" sessionType:STANDARD browser:Chrome(111) os:windows
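One possible starting point (a rough sketch against the samples above; all the extracted field names are my own invention): grab the positional fields with an anchored rex, then pull the key:value pairs with targeted rex calls, which avoids the quoted userAgent value breaking a generic space-delimited extraction:

... | rex "^\[(?<log_time>[^\]]+)\]\s+(?<level>\S+)\s+(?<component>\S+)\s+(?<id1>\S+)\s+(?<id2>\S+)\s+(?<action>\S+)\s+(?<status>\S+)"
| rex "user:(?<user>\S+)"
| rex "IPaddr:(?<IPaddr>\S+)"
| rex "userAgent:\"(?<userAgent>[^\"]+)\""
| rex "sessionType:(?<sessionType>\S+)"

The signOut events simply won't populate the fields they lack, since rex leaves non-matching fields null.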
Hi All, Is there a way to import a panel into Dashboard Studio, similar to Simple XML? In Simple XML, this would import/insert a panel into the dashboard, so I could manage all filters/inputs in a global application and not have to manage them within the dashboard:

<row>
  <panel id="overview_filter" ref="overview_filter" app="Filters"></panel>
</row>

Why: imagine you have n dashboards, all of which use the same set of inputs/tokens. Instead of writing the inputs/token code n times (once per dashboard), you can write it once and import it with the panel and panel id. The benefit, apart from less coding, is that whenever anyone wants to change the name or title of a token, or add a new token, you only have to do it in one place and it automatically goes into all dashboards. I could not find a way to import code/panels into Dashboard Studio. I am sure it exists - my searches just couldn't find how to do it.

cheers
-brett
Need help creating a Splunk query that shows a value of zero for fields having null values, while fields with numeric values show their exact count.

For example: I want to search all events for fields containing the specific keywords I am searching for. For the others, where the keyword is not present in the field value, the result should show a count of 0.
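One common pattern (a minimal sketch with hypothetical index, field, and keyword names; adjust to the real ones): count each keyword conditionally, then use fillnull so any missing count becomes 0 instead of null:

index=my_index
| stats sum(eval(if(like(my_field, "%keyword1%"), 1, 0))) as keyword1_count
        sum(eval(if(like(my_field, "%keyword2%"), 1, 0))) as keyword2_count
| fillnull value=0

Keywords that never match simply sum to 0, and fillnull covers any count that comes back null.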
Hello all!

I am attempting to dynamically add 'Next Steps' to a notable event based on a lookup table in my Correlation Search Splunk query. I was wondering if it is possible to do this using variable substitution?

For example, if my notable name is X, then populate the 'Description' and 'Next Steps' columns with the associated fields in the lookup table.

If this is not possible at the moment, can anyone suggest another way to get this data to populate dynamically?

Thanks!
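One possible direction (a sketch only; the lookup name and column names are hypothetical): enrich the correlation search results with the lookup so the description and next-steps text ride along on every result row:

... | lookup notable_next_steps.csv notable_name OUTPUT description, next_steps

With description and next_steps present as result fields, field-substitution tokens in the notable event's Description and Next Steps settings may be able to reference them; whether that substitution is supported there is worth verifying against your ES version's documentation.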
I want to avoid using saved searches and lookup tables as much as possible so it's easily maintainable by anyone on the team. I also want to make it as future-proof as possible so it "just works" with little need to update or modify. My end goal is to create a query that produces a True/False (or equivalent) result for each value when compared to the max value of the same field. To explain in more detail: I want the query to take the latest version of the Trellix/McAfee Agent reported in Splunk and compare that value against the full set, returning True/False where the numbers match. I can get exactly what I need using the query below, but it needs to be manually updated every time the Agent version is updated.

source=trellix AgentVer=*
| eval AgentStatus=if(AgentVer=="5.7.9.182", "True","False")
| stats count BY AgentStatus

Where this gets complicated is when I try to isolate the latest version. I've tried all kinds of ways to extract that version number into its own field and then do the comparison, and nothing I've tried works. Here's an example of what I have tried, but this is not exhaustive because I've tried 500 different ways...

<!-- This query produces the version I need in a new field -->
source=trellix AgentVer=* | stats max(AgentVer) AS TAV

<!-- Then I try to compare the value in the new TAV field to the old field -->
source=trellix AgentVer=* | stats max(AgentVer) AS TAV | eval Status=if(AgentVer==TAV, "True","False") | table Status
<!-- No good -->

<!-- So then I try to take it a step further -->
source=trellix AgentVer=* | stats max(AgentVer) AS TAV | rex field=TAV (?<TA>"^(?:^\d+(\.\d+)+$)") | eval Status=if(AgentVer==TA, "True","False") | table Status
<!-- No good -->

<!-- Ok, maybe a subsearch will work -->
source=trellix AgentVer=* [search source=trellix AgentVer=* | stats max(AgentVer=*) AS TA | table TA] | eval Status=if(AgentVer=TAV, "True","False") | table Status
<!-- No good -->

Again, the above are just examples of what I've tried. I've tried replacing | stats max(AgentVer) with | eval TA=max(AgentVer), I've tried chart instead of stats, etc. I've even tried duplicating the field and using the duplicate instead of the original, with no luck. I've found nothing that can do what I'm trying to do. I hope it's possible, but maybe I'm reaching here. Does the community have any recommendations for how to solve this? Thank you ahead of time!
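One likely fix (a minimal sketch, untested against this data): stats collapses the result set, so after | stats max(AgentVer) AS TAV the AgentVer field no longer exists, which is why every comparison against it returned nothing. eventstats computes the same max but attaches it to every event instead of collapsing them:

source=trellix AgentVer=*
| eventstats max(AgentVer) as LatestVer
| eval AgentStatus=if(AgentVer==LatestVer, "True", "False")
| stats count by AgentStatus

One caveat: max() on a string field compares lexicographically, so dotted versions such as 5.10.x would sort below 5.9.x; if that case can occur, the version string needs normalizing before taking the max.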
I'm using a timechart visualization as a drilldown on a dashboard where the time range is controlled by radio buttons with options of: -24h@h, -7d@d, -30d@d, -60d@d, -90d@d, -6mon@d, and -1y@d.

...... | timechart count by "Site Name"

Mostly everything works fine, but when I select -6mon@d or -1y@d, the timechart no longer displays the events with their actual date and instead labels all of them as the first of the month (i.e. July 1, 2023). I imagine this has something to do with timechart's automatic grouping based on time range, but is there a way to disable this and have the events displayed with their actual date? Not only is this important for analysis purposes, I have a drilldown of this timechart that shows the specific event data, and my search depends on the timechart returning the specific date. See the search below:

........ | eval dtg=strftime($dd_earliest$, "%d %b %Y") | where Start=dtg AND 'Site Name'="$selected_site$"

These values are set in the drilldown stanzas of the search:

| timechart count by "Site Name"
<set token="selected_site">$click.name2$</set>
<eval token="dd_earliest">$click.value$</eval>
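One possible fix (a sketch, assuming daily granularity is what the drilldown needs): timechart automatically widens its buckets, reaching roughly one-month spans for six-month and one-year ranges, and the span argument pins the bucket size explicitly:

...... | timechart span=1d count by "Site Name"

With a fixed daily span, $click.value$ carries the actual day rather than the first of the month. The trade-off is that a one-year range then renders around 365 columns, which can get slow or visually dense.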
I have a savedsearch running on a 5-minute cron schedule, iteratively working through a list of previously saved search parameters. Two things:

(1) Can I have a conditional cron schedule, such that I somehow detect when work needs to be performed and only then enable the cron? The processing for a day may take 6 hours, but the cron keeps running and burning resources.

(2) Some of the savedsearches run in under 1 minute, but others take longer than 5 minutes. Instead of using a cron schedule, can I detect the savedsearch ID, detect when it has completed, and then initiate the subsequent execution of the savedsearch on the next batch of data?
Hey, I am using Add-on Builder version 4.1.3 and have many add-ons in it. Suddenly the Add-on Builder home page displays blank. I checked collection.conf in the Add-on Builder app under local and compared all add-on stanzas against /etc/apps, but it didn't work. Can anyone find a solution for this?
We want events to be separated at each header whenever there is a new entry in the CSV file. What props should be applied to the sourcetype so that everything under one header lands in a single event?

Sample file:

We want the details in one event whenever a header is inserted in the CSV file. Please suggest.
Hello, I have an index with a field that records how long a computer has been running. Basically, when I display the information for a computer over 2 days I get this: I would like to get the max value before each 'shutdown', where the value resets to 0 afterwards. Is there any simple way I could do that?
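One possible approach (a sketch with hypothetical index and field names, assuming the counter resets to 0 at each shutdown): detect each reset, number the runs, and take the max per run:

index=my_index host=my_host
| sort 0 _time
| streamstats current=f last(uptime) as prev_uptime by host
| eval reset=if(uptime < prev_uptime, 1, 0)
| streamstats sum(reset) as run_id by host
| stats max(uptime) as max_before_shutdown by host, run_id

Each run_id marks one power cycle, so max_before_shutdown is the peak the counter reached before that reset.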
I have created an alert for when the response time is high and the service is down, scheduled on a cron job set to every 2 minutes, so it notifies me when the server is down. But I also want to create a recovery alert: when the service is up again, the alert should trigger only one time after the down. I created this up alert by putting a low-response-time condition on it, but it triggers every 2 minutes and sends an email, which is not required. I just need one email notification, after a down, saying that the service is up again and running normally.
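One way to fire only on the down-to-up transition (a sketch with hypothetical index, field names, and threshold): derive a state per event, compare the most recent state with the one before it, and return a row only when the state flipped:

index=my_index sourcetype=my_service earliest=-10m
| eval state=if(response_time < 2000, "up", "down")
| sort 0 _time
| streamstats current=f last(state) as prev_state
| tail 1
| where state="up" AND prev_state="down"

Scheduled every 2 minutes with "trigger when number of results > 0", this sends exactly one recovery email per outage. Alert throttling is a simpler alternative if an occasional duplicate email is acceptable.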
Hi All, We are trying to upgrade the Splunk universal forwarder from version 8.1.0 to 9.0.3 using Ansible scripts, but we are getting an error when the script tries to start the forwarder. The error and the Ansible playbook are attached below.

Ansible playbook:

- name: Splunk Upgrade | Copy tgz to target
  copy:
    src: /pub/splunk/splunkpackages/{{ splunk_package }}
    dest: /tmp/{{ splunk_package }}

- name: Splunk Upgrade | Check for SYSV scripts
  stat:
    path: /etc/rc.d/init.d/splunk
  register: splunk_sysv

- name: Splunk Upgrade | Stop Splunk
  shell: |
    {{ splunk_home }}/bin/splunk stop
    tar -cvf /opt/splunk_config_backup.tar {{ splunk_home }}/etc/

- name: Splunk Upgrade | Clean up SYSV scripts
  shell: |
    rm /etc/rc.d/init.d/splunk
    /opt/splunkforwarder/bin/splunk disable boot-start
  when: splunk_sysv.stat.exists
  ignore_errors: yes

# ==> this is the task that fails
- name: Splunk Upgrade | Upgrade Forwarder and restart
  shell: |
    cd /opt
    tar -xzvf /tmp/{{ splunk_package }}
    chown -R splunk:splunk /opt/splunkforwarder
    {{ splunk_home }}/bin/splunk start --accept-license --answer-yes --no-prompt
  register: splunk_upgrade

- name: Splunk Upgrade | Convert SYSV to Systemd
  shell: |
    {{ splunk_home }}/bin/splunk stop
    chown -R splunk:splunk /opt/splunkforwarder
    /opt/splunkforwarder/bin/splunk enable boot-start -user splunk
  when: splunk_sysv.stat.exists

- name: Splunk Upgrade | start and enable splunk
  service:
    name: SplunkForwarder.service
    enabled: true
    state: started

- name: Splunk Upgrade | Cleanup tgz
  file:
    state: absent
    path: /tmp/{{ splunk_package }}

Error in splunk forwarder log:
Hello, I have this bar graph with a static bar to show a deadline. I need to change the static bar's position dynamically when the dates change. Is that possible?
hi, the query below is used for the drilldown of my line graph:

| savedsearch XYZ
| eval Deactivated = strftime(strptime(TO_DATE, "%Y-%m-%d %H:%M:%S.%N"), "%B-%y")
| eval Created = strftime(strptime(FROM_DATE, "%Y-%m-%d %H:%M:%S.%N"), "%B-%y")
| where $apps$ and $bscode$ and $function$ and $dept$ and $country$ and $emp_type$
| search $usertype|s$ = $monthname|s$
| table Function, BS_ID, APP_NAME, MUID, FIRST_NAME, LAST_NAME, FROM_DATE, TO_DATE, LASTLOGON, COUNTRY, CITY, DEPARTMENT_LONG_NAME, "Business Owner", SDM, "System Owner", "Validation Owner"

The above query looks like this in the search panel:

| savedsearch hourradata2
| eval Deactivated = strftime(strptime(TO_DATE, "%Y-%m-%d %H:%M:%S.%N"), "%B-%y")
| eval Created = strftime(strptime(FROM_DATE, "%Y-%m-%d %H:%M:%S.%N"), "%B-%y")
| where like (APP_NAME ,"Managed iMAP Application") and like (BS_ID,"%") and like (Function,"%") and like (DEPARTMENT_LONG_NAME,"%") and like (COUNTRY,"%") and like(EMPLOYEE_TYPE,"%")
| search "Active" = "June-23"
| table Function, BS_ID, APP_NAME, MUID, FIRST_NAME, LAST_NAME, FROM_DATE, TO_DATE, LASTLOGON, COUNTRY, CITY, DEPARTMENT_LONG_NAME, "Business Owner", SDM, "System Owner", "Validation Owner"

If the highlighted part is removed, the query gives results.
Trust everyone is doing well. I noticed that, for a particular application being monitored, the total response time displayed in AppDynamics is not the true value; it shows a lower response time that just isn't possible. Checking an individual transaction's response time displays the true value, but the total is just wrong. What could be the cause of this?
Dataframe row:

{"_c0":{"0":"[","1":" {","2":" \"table_name\": \"pc_dwh_rdv.gdh_ls2lo_s99\"","3":" \"deleted_count\": 18","4":" \"redelivered_count\": 0","5":" \"load_date\": \"2023-07-27\"","6":" }","7":" {","8":" \"table_name\": \"pc_dwh_rdv.gdh_spar_s99\"","9":" \"deleted_count\": 8061","10":" \"redelivered_count\": 1","11":" \"load_date\": \"2023-07-27\"","12":" }","13":" {","14":" \"table_name\": \"pc_dwh_rdv.gdh_tf3tx_s99\"","15":" \"deleted_count\": 366619","16":" \"redelivered_count\": 0","17":" \"load_date\": \"2023-07-27\"","18":" }","19":" {","20":" \"table_name\": \"pc_dwh_rdv.gdh_wechsel_s99\"","21":" \"deleted_count\": 2","22":" \"redelivered_count\": 0","23":" \"load_date\": \"2023-07-27\"","24":" }","25":" {","26":" \"table_name\": \"pc_dwh_rdv.gdh_revolvingcreditcard_s99\"","27":" \"deleted_count\": 1285","28":" \"redelivered_count\": 0","29":" \"load_date\": \"2023-07-27\"","30":" }","31":" {","32":" \"table_name\": \"pc_dwh_rdv.gdh_phd_s99\"","33":" \"deleted_count\": 2484","34":" \"redelivered_count\": 204","35":" \"load_date\": \"2023-07-27\"","36":" }","37":" {","38":" \"table_name\": \"pc_dwh_rdv.gdh_npk_s99\"","39":" \"deleted_count\": 1705","40":" \"redelivered_count\": 0","41":" \"load_date\": \"2023-07-27\"","42":" }","43":" {","44":" \"table_name\": \"pc_dwh_rdv.gdh_npk_s98\"","45":" \"deleted_count\": 1517","46":" \"redelivered_count\": 0","47":" \"load_date\": \"2023-07-27\"","48":" }","49":" {","50":" \"table_name\": \"pc_dwh_rdv.gdh_kontokorrent_s99\"","51":" \"deleted_count\": 12998","52":" \"redelivered_count\": 0","53":" \"load_date\": \"2023-07-27\"","54":" }","55":" {","56":" \"table_name\": \"pc_dwh_rdv.gdh_gds_s99\"","57":" \"deleted_count\": 13","58":" \"redelivered_count\": 0","59":" \"load_date\": \"2023-07-27\"","60":" }","61":" {","62":" \"table_name\": \"pc_dwh_rdv.gdh_dszins_s99\"","63":" \"deleted_count\": 57","64":" \"redelivered_count\": 0","65":" \"load_date\": \"2023-07-27\"","66":" }","67":" {","68":" \"table_name\": \"pc_dwh_rdv_gdh_monat.gdh_phd_izr_monthly_s99\"","69":" \"deleted_count\": 1315","70":" \"redelivered_count\": 0","71":" \"load_date\": \"2023-07-27\"","72":" }","73":"]"}}

The above is a sample message from an event we have in Splunk. We want to extract the deleted_count values such as 1315, 57, 13, etc., and add them as separate fields using the rex command. We also want to extract the load_date value, such as 2023-07-27, and add it as a separate field. Please help us with this.
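One possible rex sketch (output field names are illustrative): max_match=0 collects every occurrence into a multivalue field, and matching any non-digits (\D) between the key name and its value sidesteps the escaped quotes and colons in the raw event:

... | rex max_match=0 "deleted_count\D+(?<deleted_count>\d+)"
| rex max_match=0 "load_date\D+?(?<load_date>\d{4}-\d{2}-\d{2})"

deleted_count then holds every count in the event (18, 8061, 366619, ...) as a multivalue field; mvexpand deleted_count would turn them into one row each. Since load_date is the same for all entries here, keeping only its first match may be enough.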
Hi, for some reason AppDynamics is not showing all the servers in dashboards. We can see all the servers:
- in the Tiers and Nodes section
- in the Service Endpoints section, etc.
but I do not know why they no longer appear in the dashboard when they used to. Any idea where to start?