All Posts
@geninf5 @gcusello  I had a similar requirement, and I solved it using a combination of a cron schedule and a condition in the search query. It's just two steps: first set up a weekly schedule, then add a condition so the search returns results only once every two weeks.

1. Set up a weekly cron schedule. For example, to run at 6 p.m. every Sunday, use: 0 18 * * 0

2. Add the following condition to your search query, placing it where the query runs efficiently without affecting the final output:

| eval biweekly_cycle_start=1726977600, biweekly=round(((relative_time(now(),"@d")-biweekly_cycle_start)/86400),0)%14
| where biweekly=0

In this example, I introduced a reference epoch time, biweekly_cycle_start, to anchor the two-week cycle. It is the epoch time for two weeks before the alert schedule's starting date. For instance, if your schedule begins on October 6, 2024, use the epoch time for the start of the day on September 22, 2024, which is 1726977600. Each time the alert runs, the condition checks whether a whole two-week cycle has passed since that anchor: it returns results every two weeks and no results on the off week (seven days after the previous run). Insert the condition where it will optimize search performance, before the final transforming commands like stats, top, or table.
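To sanity-check the arithmetic before scheduling, here is a minimal sketch; run_day=1728187200 is just a hypothetical epoch for midnight ET on Sunday, October 6, 2024, so swap in any candidate run day:

| makeresults
| eval biweekly_cycle_start=1726977600
| eval run_day=1728187200
| eval biweekly=round((run_day-biweekly_cycle_start)/86400,0)%14
| eval fires=if(biweekly=0,"yes","no")

For this anchor-plus-fourteen-days example, biweekly comes out to 0 and fires is "yes"; seven days later it would be 7 and "no".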
Note: in my testing, this behavior is specific to bash on Linux.
Just to be clear, this is specifically for Splunk SOAR. I would like to delete unused tags on SOAR containers. I do understand that I can go to Administration -> Administration Settings -> Tags and manually delete them, but we have thousands, and without manually checking each one I am not sure what is in use. I would like to be able to delete everything that is no longer in use on containers.
Please elaborate on "it isn't working".  That doesn't give us anything to work with.  Show us what you get so we can offer other suggestions. Use the eval command to add a field to the results table.
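For example, a minimal sketch of that idea, assuming the five fixed column names from your example table:

| eval Total_Xs = if(Field1="X",1,0) + if(Field2="X",1,0) + if(Field3="X",1,0) + if(Field4="X",1,0) + if(Field5="X",1,0)

This only works when the column set is known in advance; for a variable set of columns, see the foreach approach below.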
Ahh, I see what you mean. Never thought to use the comment like that, let alone several times. Thank you!
Thank you for the response! So, I tried it, and it isn't working, but for more context: I run a stats command for the table, and after that, I run a fillnull to insert the X's into the table.  I tried another stats after that, but that didn't work. How would I append the "Total_X's" field to the table?
The foreach command can do that.

<<your search>>
| eval Total_Xs = 0
| foreach * [| eval Total_Xs = Total_Xs + if('<<FIELD>>'="X", 1, 0)]
We identified the issue. Startdate is a timestamp_NTZ (no time zone), so UTC. The config was set to the Eastern time zone; once it was adjusted, it worked perfectly. Simple misconfig. Took a while to identify the issue, though. Thanks for your input.
Missing data makes me immediately think of two things, and one is much easier to find and fix.

1) Bad time ingestion

index=_introspection
| eval latency=abs(_indextime-_time)
| table _time _indextime latency
| sort - latency
| head 15

Try sorting both - (descending) and + (ascending); this will help point out anything that is being ingested with bad time formatting, causing the data to appear as missing.

2) Skipped events

You would need to dig through your HF internal logging to check for full queues or max transmit violations; a starting-point search is sketched below.

Give that a start.
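A hedged sketch for the skipped-events check, using fields that metrics.log emits on the forwarder (host=<your_HF> is a placeholder for your heavy forwarder):

index=_internal host=<your_HF> source=*metrics.log* group=queue blocked=true
| stats count by host, name

A nonzero count per queue name points to back-pressure on that queue.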
Hello everyone,  I have a table (generated from stats) that has several columns, and some values of those columns have "X's". I would like to count those X's and total them in the last column of the table. How would I go about doing that? Here is an example table, and thank you!

Field1 | Field2 | Field3 | Field4 | Field5 | Total_Xs
X | X | Foo | Bar | X | 3
Foo2 | X | Foo | Bar | X | 2
X | X | X | Bar | X | 4
Hello. The field in my search is "file_name" while the field in the lookup is called "phrase". I tried to use this, but it didn't work:

| lookup my_lookup.csv phrase OUTPUT file_name AS found_key
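For what it's worth, a hedged sketch of the usual fix: the rename belongs on the input side of the lookup, matching the lookup's phrase column against the event's file_name. Here found_key just echoes the matched phrase; adjust the OUTPUT clause to whatever column you actually want back:

| lookup my_lookup.csv phrase AS file_name OUTPUT phrase AS found_key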
I tend to just use the by clause, as the first field mentioned is used as the over field, but that's just a matter of style / preference. Since you can only specify two fields on a chart command, over and by is probably clearer. In this example, the eventstats is a way of providing a single value for the "over" field so that you get a single row of statistics.
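A self-contained sketch of that pattern (sample rows invented for illustration):

| makeresults format=csv data="ServerName,UpgradeStatus
Server1,Completed
Server2,Pending
Server3,Pending"
| eventstats count as Total
| chart count over Total by UpgradeStatus

Because every event carries the same Total value, the chart collapses to a single row: Total=3, Completed=1, Pending=2.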
Compare the contents of "splunk/etc/apps/splunk-rolling-upgrade" against the manifest file. Some previous files may have been left behind after the upgrade and be triggering the Python 2 warning. If any have the same name, you may need to manually copy them from the extracted v9 Splunk package to a safe location.
Alternatively, you could do a very ugly hack: name your field so it will be iterated over at the end and, knowing it will be added to itself, just divide it in half.
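A hedged sketch of that hack, assuming all columns are numeric and that the accumulator, being created last, is also iterated last by foreach:

| eval zz_total=0
| foreach * [ eval zz_total = zz_total + <<FIELD>> ]
| eval Total = zz_total / 2
| fields - zz_total

On the final iteration zz_total is added to itself, doubling the sum, hence the divide-by-two.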
Aha!  That's how you avoid the double-count.  I've encountered this before.  Will definitely put this on file.  Thank you.
Thank you for your feedback. We appreciate your engagement and aim to provide the best assistance possible based on the information shared in the community forum. The query provided was our best effort to address your question using the details you offered. Community members, including myself, volunteer our time and knowledge to help fellow Splunk users. If you need more tailored assistance or have specific requirements not covered in the initial query, there are a couple of options:

1. You can provide more detailed information about your data structure and exact requirements in the forum. This would help us refine the solution further.
2. For more in-depth, real-time support where you can share your screen and get personalized guidance, Splunk offers OnDemand Services. These allow for "shoulder surfing" and can address your specific needs more directly.

Regarding the query itself, you can adapt the latter part to your specific environment, as mentioned by @yuanliu or me:

| stats count as total_servers, count(eval(status_field="YourCompletedStatus")) as completed_count, count(eval(status_field="YourPendingStatus")) as pending_count
| eval completed_percentage = round(completed_count / total_servers * 100, 0)
| eval pending_percentage = round(pending_count / total_servers * 100, 0)
| eval "Completed Servers" = completed_count . " (" . completed_percentage . "%)"
| eval "Pending Servers" = pending_count . " (" . pending_percentage . "%)"
| fields "Completed Servers", "Pending Servers"

Replace index, source, sourcetype, status_field, YourCompletedStatus, and YourPendingStatus with your specific values. This should work with your actual data structure. We're here to help, and we hope this guidance proves useful for your specific use case. Thanks.
Oh @ITWhisperer That is sweet. It helps me if I specify the over / by designation ... but hats off to you.

| eventstats count as Total
| chart count over Total by UpgradeStatus

Not looking forward to explaining to anyone how it works ... but work it does. (I'll add it to my collection of misusing a command to get a result. Got any others?)
Right you are. As usual, forgot about it. Here's the fixed version.

| makeresults format=csv data="ServerName,UpgradeStatus
Server1,Completed
Server2,Completed
Server3,Completed
Server4,Completed
Server5,Completed
Server6,Completed
Server7,Pending
Server8,Pending
Server9,Pending
Server10,Processing
Server11,Processing"
| stats count by UpgradeStatus
| transpose 0 header_field=UpgradeStatus
| fields - column
| eval Total=0
| foreach * [ eval Total=Total+if("<<FIELD>>"=="Total",0,<<FIELD>>) ]
Hi @jroedel , glad it worked for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Nice work @PickleRick! Am I missing something? I had previously tried with the delightful foreach command, but I can't get it to avoid double-counting the Total field, so I end up with the Total equalling 22 rather than the correct result of 11. I get the same thing with your solution as well.