Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hello @bowesmana, thank you for your explanation and assistance. I am building a dashboard for historical data.

1) Choose Time (time picker) and Time Frame (dropdown):

Choose Time (Date Range): Start 2/5/2024, End 3/2/2024
Choose Time Frame: Weekly

2) This is only dummy data; the real data (not student grades) is based on a summary index of a summary index. For each day, a number is calculated from another summary index over 30 days of data. For example, the Math Grade (10) for Student1 on 2/8/2024 is calculated from 1/7/2024 - 2/8/2024, and so on. This is the reason why I don't want to keep data on a specific date and don't want to shift data using snap.

This is the student data (in the dashboard it is transposed so the dates become columns). I put color coding: Thursday=Blue, Monday=Red, Tuesday=Green.

Date        Weekday     Student1 MathGrade
2/8/2024    Thursday    10
2/9/2024    Friday      9
2/10/2024   Saturday    8
2/11/2024   Sunday      7
2/12/2024   Monday      6
2/13/2024   Tuesday     5
2/14/2024   Wednesday   6
2/15/2024   Thursday    7
2/16/2024   Friday      8
2/17/2024   Saturday    9
2/18/2024   Sunday      10
2/19/2024   Monday      10
2/20/2024   Tuesday     9
2/21/2024   Wednesday   8
2/22/2024   Thursday    7
2/23/2024   Friday      6
2/24/2024   Saturday    5
2/25/2024   Sunday      6
2/26/2024   Monday      7
2/27/2024   Tuesday     8
2/28/2024   Wednesday   9
2/29/2024   Thursday    10
3/1/2024    Friday      10

If I choose start date 2/8/2024, the weekly will start on Thursday:

           Thursday    Thursday    Thursday    Thursday
Student    2/8/2024    2/15/2024   2/22/2024   2/29/2024
Student1   10          7           7           10

If I choose start date 2/5/2024, the weekly will start on Monday:

           Monday      Monday      Monday      Monday
Student    2/5/2024    2/12/2024   2/19/2024   2/26/2024
Student1   NULL        6           10          7

If I choose start date 2/6/2024, the weekly will start on Tuesday:

           Tuesday     Tuesday     Tuesday     Tuesday
Student    2/6/2024    2/13/2024   2/20/2024   2/27/2024
Student1   NULL        5           9           8

I can start from 2/8/2024 (Thursday weekly) by using the following search:

| search Student="Student1"
| table _time, Student, MathGrade, EnglishGrade, ScienceGrade
| timechart span=1w first(MathGrade) by Student useother=f limit=0
| fields - _span _spandays
| eval _time = strftime(_time,"%m/%d/%Y")
| transpose 0 header_field=_time column_name=Student

I can start from 2/12/2024 (Monday weekly) by using the following search:

| search Student="Student1"
| where (_time % 604800) = 363600
| table _time, Student, MathGrade, EnglishGrade, ScienceGrade
| timechart span=1w first(MathGrade) by Student useother=f limit=0
| fields - _span _spandays
| eval _time = strftime(_time,"%m/%d/%Y")
| transpose 0 header_field=_time column_name=Student

How do I start from 2/13/2024 (Tuesday weekly)? Please suggest. I appreciate your help. Thank you.
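One possible direction, offered only as a sketch and not a verified answer: following the same modulo pattern as the Monday search, the Tuesday offset would be the Monday offset plus one day (363600 + 86400 = 450000 seconds), assuming the daily events land at the same clock time as in the Monday example. An alternative worth testing is the aligntime option of the bin command in place of the modulo filter.

| search Student="Student1"
| where (_time % 604800) = 450000    ``` hypothetical offset: Monday (363600) plus one day (86400) ```
| table _time, Student, MathGrade, EnglishGrade, ScienceGrade
| timechart span=1w first(MathGrade) by Student useother=f limit=0
| fields - _span _spandays
| eval _time = strftime(_time,"%m/%d/%Y")
| transpose 0 header_field=_time column_name=Student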
Hi all, I'm struggling with a problem: I can't find any error logs in the Asset and Identity Management dashboard in Splunk Enterprise Security. It shows NOT FOUND, and the error message behind it is "You need edit_modinput_manager capability to edit information." But I'm an admin and already have this permission. I hope someone can tell me how I can fix this error. Thank you.
The following did the job for me:

| timechart span=1m eval(sum(is_slow)/count) by v
| rename NULL as ratioOfSlow
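For context, a minimal sketch of the surrounding pipeline this one-liner assumes; the index, the duration field, and the 2-second threshold are hypothetical stand-ins:

index=web sourcetype=access_combined
| eval is_slow = if(duration > 2, 1, 0)    ``` hypothetical: flag events slower than 2 seconds ```
| timechart span=1m eval(sum(is_slow)/count) by v
| rename NULL as ratioOfSlow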
The solutions that use props.conf are available to Splunk Cloud users.  Put the props.conf file into an app and upload the app to your Splunk Cloud search head.  Once it passes vetting, click to install it and the props will be put in the right place(s).
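A minimal sketch of what such an app can look like; the app name my_props_app and the sourcetype name my_sourcetype are placeholders, and the actual props settings depend on the data:

my_props_app/
    default/
        app.conf
        props.conf

# default/app.conf
[install]
is_configured = true

[ui]
is_visible = false
label = My props app

# default/props.conf
[my_sourcetype]
# index-time and search-time settings for this sourcetype go here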
allow_skew won't stop alerts from triggering every 5 minutes.  To stop the alerts you have a few options:
1) Stop whatever is triggering the alerts
2) Change the threshold of the alert so it's less likely to be triggered
3) Run the alert less frequently (see the sketch below)
4) Some combination of the above
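For option 3, a sketch of what the schedule change could look like in savedsearches.conf; the stanza name and cron expression are placeholders:

[My Alert Name]
# hypothetical: run hourly instead of every 5 minutes
cron_schedule = 0 * * * *
# allow_skew only staggers the start time within the scheduled window;
# it does not change how often the search runs
allow_skew = 10m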
We had a problem with our syslog server and a bunch of data went missing in the ingest. The problem was actually caused by the UF not being able to keep up with the volume of logs before the logrotate process compressed the files, making them unreadable. I caught this in progress and began making copies of the local files so that they would not get rotated off the disk. I am looking for a way to put them back into the index in the correct place in _time. I thought it would be easy but it is turning out harder than I expected.

I have tried making a monitor input for a local file and cat/printf-ing the log file into the monitored file. I have also tried to use the "add oneshot" CLI command; neither way has gotten me what I am wanting. The monitored file kind of works, and I think I could probably make it better given some tweaking.

The "add oneshot" command actually works very well, and it is the first time I am learning about this useful command. My problem, I believe, is that the sourcetype I am using is not working as intended. I can get data into the index using the oneshot command and it looks good as far as breaking the lines into events, etc. The problem I am seeing is that the parsing rules included with the props/transforms in the Splunk_TA_paloalto add-on are not being applied. Splunk is parsing some fields, but I suspect it is guessing based on the format of the data. When I look at the props.conf for the TA, I see it uses a general stanza called [pan_log], but inside the config it will transform the sourcetype into a variety of different sourcetypes based on the type of log in the file (there are at least 6 possibilities):

TRANSFORMS-sourcetype = pan_threat, pan_traffic, pan_system, pan_config, pan_hipmatch, pan_correlation, pan_userid, pan_globalprotect, pan_decryption

When I use the oneshot command, the data goes into the index and I can find it by specifying the source, but none of these transforms is happening, so the logs are not separated into the final sourcetypes.

Has anybody run into a problem like this and know a way to make it work? Or have any other tips that I can try to make some progress on this? One thing I was thinking is that the Splunk_TA_paloalto add-on is located on the indexers, but not on the server that has the files that I am running the oneshot command from. I expected this would all be happening on the indexer tier, but maybe I need to add it locally so Splunk knows how to handle the data. Any ideas?
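For reference, a sketch of the oneshot invocation being described; the file path, index, and host values are hypothetical and would need to match the environment. Note that TRANSFORMS-sourcetype rules are applied at parse (index) time, so the TA's props/transforms need to be present on whichever instance first parses this data.

# run on the instance holding the saved copies of the rotated files
$SPLUNK_HOME/bin/splunk add oneshot /var/log/pan/backup/traffic.log -sourcetype pan_log -index pan_logs -host fw01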
Why can I not find any documentation on the Firewall Investigator module of Splunk Enterprise?
Hello all, I wanted to share my recently published Splunk extension for Chrome and Edge. This extension is free and enables several features:
- Code commenting with Ctrl + /
- Comment collapsing/folding
- Saving and retrieving queries
- Inline command help
You can check it out at Splunk Search Assistant (google.com)
Julio
Hi @yuanliu, thank you for the inputs. As we have a larger number of alerts to set up, we want to go with the CSV option. I will create the CSV file and add the time, date, and month, but I am not sure how to link it in the query. Can you please help me with that?
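Not a confirmed answer, only a sketch of the general pattern, assuming a hypothetical lookup file alert_schedule.csv with string columns date, month, and hour (zero-padded to match strftime output) and a flag column suppress:

``` existing alert search goes here ```
| eval date = strftime(now(), "%d"), month = strftime(now(), "%m"), hour = strftime(now(), "%H")
| lookup alert_schedule.csv date month hour OUTPUT suppress
| where isnull(suppress)    ``` drop all results, and therefore the alert, during the windows listed in the CSV ```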
Is there a way to change the _time field of imported data to be a custom extracted datetime field? Or at least some way to specify a different field used by the time picker? I have seen some solutions use props.conf but I am on Splunk Cloud 
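For reference, a sketch of the kind of props.conf stanza such solutions use; the sourcetype name, prefix, and format below are hypothetical and must match the actual events. These are index-time settings, so they only affect data ingested after the change:

[my_custom_sourcetype]
# hypothetical: timestamp appears after "event_date=" in the raw event
TIME_PREFIX = event_date=
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25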
Not sure if I understand.  What is "it" and what is "them" for which "it" works?  Again, this is not a search question.  Perhaps it is better suited for the Splunk Dev forum.  It looks like it is related to some data, perhaps encrypted data. You can read /opt/splunk/etc/apps/search/bin/sendemail.py and its imported modules and diagnose this message.  (_csv is one of those modules.)  But volunteers will not have sufficient information to really pinpoint the cause.
Mere count of field_AB is not particularly insightful. (Also, what are the counts?  Are they unique value counts or counts of events that have field_AB?  Many other metrics would be needed for either of these to be useful.)

Let me clarify something.  In your latest illustration of Initial Query Results, you are having trouble populating field_D, field_C, and field_E corresponding to select values of field_AB.  Is this correct?  Your illustration cannot be a precise representation because it is impossible for groupby field_AB to have the same value "UniqueID" in every row.  This means that your observed missing values of field_D, field_C, and field_E have something to do with your actual data.

In other words, coalesce, stats, and groupby are all functioning as designed.  All those "gaps" in field_D, field_C, and field_E only mean that source_1 and source_2 contain different sets of unique values of field_AB (field_A in source_1, field_B in source_2).  This is a simple test:

index=index_1 (sourcetype=source_1 field_D="Device" field_E=*Down* OR field_E=*Up*) OR (source="source_2" earliest=-30d@d latest=@m)
| eval field_AB=coalesce(field_A, field_B)
| eval source = case(source == "source_2", "source_2", sourcetype == "source_1", "source_1", true(), "mingled")
| stats dc(field_AB) by source

Are the counts the same for both sources? Are there any "mingled" counts? (Note one of your searches uses sourcetype as a filter, the other uses source.  This is generally a bad strategy because they will confuse you three months from now.)

If you need concrete help, you must post anonymized raw data and show us which data set causes the problem.  Alternatively, you need to inspect the data closely; specifically, how field_D and field_E are populated with field_A in source_1, and how field_C is populated with field_B in source_2.
Something like

|tstats count max(_time) as _time ``` you will need latest _time ``` where index=app-idx host="*abfd*" sourcetype=app-source-logs by host
| eval dom = strftime(_time, "%d"), hod = strftime(_time, "%H")
| where NOT dom IN (10, 18, 25) AND (8 > hod OR hod > 11) ``` whether AND or OR depends on exact semantic ```
| fields - _time dom hod

Several points of discussion:
- The semantics of "on 10, 18, 25 and during 8am to 11am" is very loose in English.  Two opposite interpretations can be conveyed by this same phrase, so you need to tune the logic according to your intention.
- _time is taken as the latest in the dataset by host.  Depending on your data density, you may want to take some other approach, such as info_endtime.
- It is probably better to code your exclusions in a CSV than to hard-code them in the search.  But that's out of the scope of this question.
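A sketch of the CSV variant mentioned in the last point, assuming a hypothetical lookup file exclusion_days.csv with a single zero-padded string column dom listing the days of the month to skip:

|tstats count max(_time) as _time where index=app-idx host="*abfd*" sourcetype=app-source-logs by host
| eval dom = strftime(_time, "%d"), hod = tonumber(strftime(_time, "%H"))
| lookup exclusion_days.csv dom OUTPUT dom as excluded_dom    ``` excluded_dom is non-null only when dom appears in the CSV ```
| where isnull(excluded_dom) AND (8 > hod OR hod > 11)    ``` as above, AND vs OR depends on the intended semantics ```
| fields - _time dom hod excluded_dom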
NetApp products which are running Data ONTAP are being transitioned from ZAPI to REST.  Support for ZAPI will be dropped in future ONTAP releases.

Since the Splunk TA uses ZAPI, does it also support REST?  If it does not currently use REST, are there plans to deliver a future version which does? Thanks.
There are many possible ways of choosing time, so how do you want it to work if

a) the user selects from 00:00 yesterday to now (a bit more than one day)
b) last 24 hours, with end time as "now" not the beginning of the minute, so a tiny bit more than one day
c) yesterday + the day before to now, 2.xx days

and other permutations of part days.

Using timechart would be the general way to go, which will handle the majority of cases, but then if you can define the conditions for the display of a single row, all you need to do is set a token which contains this as the token value

<set token="use_single_row">| addcoltotals | tail 1 | fields - _time</set>

and then in your search, do

... | timechart span=1d sum(b) as b by env $use_single_row$

and to determine your conditions of the time range selected, make a search at the start of your XML like this

<search>
  <query>
    | makeresults
    | addinfo
    | eval gap=if(info_max_time - info_min_time < (86400+60), 1, 0)
  </query>
  <earliest>$time_picker.earliest$</earliest>
  <latest>$time_picker.latest$</latest>
  <done>
    <eval token="use_single_row">if($result.gap$=1, "| addcoltotals | tail 1 | fields - _time", "")</eval>
  </done>
</search>

which assumes your time picker token is called time_picker. This will run when the user selects the time and calculates the gap between search start and end; if it's less than 24 hours, it will set the new token to sum the fields.

You could also do this as two different panels and show one or the other depending on the gap using the <panel depends="$token$"> construct, but that depends on how you end up displaying these results.
Thanks for the solution; however, there is an additional case to be handled. Your solution covers the 1-day timeframe, but we also want to see totals for timeframes > 1 day broken down by day, i.e., license usage by environment by day. Is there a way to make the display of the stats dashboard conditional on the selected time frame?
When you bin by time, the first part of the time specifier indicates the size of the bin, e.g. span=1w.

You are looking to make buckets of 7 days - those 7 days will start going back from today, which here is March 7, so counting back will give you

29 Feb
22 Feb
15 Feb
8 Feb

However, when you use the @ specifier, this tells Splunk to "snap to" a specific time start point within the bucket, so when you do span=1w@w it tells Splunk to use a 7 day time bucket, but to snap to the beginning of the week, which is a Sunday.

You can change this behaviour of the 'start of the week' by adding a relative indicator after that snap-to value, so span=1w@w1 will tell Splunk to snap to the Monday as the start of the week.

When you do any kind of bucket creation with bin or span=1w in a timechart, it effectively changes the _time of all events to be the start of the bin - it has to do this because if Student 1 and 2 have different dates within the same week for their first grade, what date could it use for the single event it produces? So, when you see Student1 showing as the week of the 4th Feb, it simply means that Student1 was seen at some time in that 7 day bucket.

See this doc, which has a section on the snap-to characteristics: https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/SearchTimeModifiers
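Applied to the searches earlier in this thread, a minimal sketch of what the Monday snap-to looks like (field and student names as used above):

| search Student="Student1"
| timechart span=1w@w1 first(MathGrade) by Student useother=f limit=0    ``` weekly buckets snapped to Monday ```
| eval _time = strftime(_time,"%m/%d/%Y")
| transpose 0 header_field=_time column_name=Student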
|tstats count where index=app-idx host="*abfd*" sourcetype=app-source-logs by host

This is my alert query. I want to modify the query so that I won't receive alerts at certain times. For example, every month on the 10th, 18th, and 25th, and during 8am to 11am, I don't want to get the alerts. For all other days it should work as normal. How can I do it?
If you run timechart across a 24 hour window and you specify @d as the time bucket, it will count by the day, so say you run the search at 10:00am, it will give you the 24 hour window of yesterday from 10:00am to midnight and today from midnight to 10:00am.

Currently your query will give you

yesterday_time  env1_count  env2_count  ...  total
today_time      env1_count  env2_count  ...  total

What is your intention in showing this information - is time relevant?

If you are just looking for a sum of (b) for each env, then just use stats, e.g.

| stats sum(b) as sum by env
| transpose 0 header_field=env
Worked like a charm, thank you!