All Posts

Actually, that is an interesting train of thought. You could use this to conditionally create a set of conditions that can never be fulfilled, like some non-existent sourcetype being searched for only on those days you don't want the search to run.
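A minimal sketch of that idea (the index name, sourcetype, and schedule day are placeholders, not anything from the original thread): the subsearch returns a search fragment, and on the days you do not want results it injects a sourcetype that never matches, so the outer search returns nothing.

index=web
    [| makeresults
     ``` %w is the day of week, 0 = Sunday ```
     | eval dow=strftime(now(), "%w")
     ``` on Sundays return a condition that matches everything, otherwise one that matches nothing ```
     | eval search=if(dow="0", "sourcetype=*", "sourcetype=__does_not_exist__")
     | fields search]
| stats count BY host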
This looks like a simple | dedup ip. If there is some other logic involved, please explain.
How do I dedup or filter out data with a condition? For example, below I want to filter out rows that contain name="name0". The condition should be able to handle any IPs in the ip field, because the IPs can change and in the real data there are many more of them. The name0 rows are not in any particular order. The dedup/filter should not be applied to IPs that don't have a "name0" row, and it should not be applied to a unique IP whose only row is "name0". Thank you for your help.

Data:

ip name location
1.1.1.1 name0 location-1
1.1.1.1 name1 location-1
1.1.1.2 name2 location-2
1.1.1.2 name0 location-20
1.1.1.3 name0 location-3
1.1.1.3 name3 location-3
1.1.1.4 name4 location-4
1.1.1.4 name4b location-4
1.1.1.5 name0 location-0
1.1.1.6 name0 location-0

Expected output:

ip name location
1.1.1.1 name1 location-1
1.1.1.2 name2 location-2
1.1.1.3 name3 location-3
1.1.1.4 name4 location-4
1.1.1.4 name4b location-4
1.1.1.5 name0 location-0
1.1.1.6 name0 location-0

| makeresults format=csv data="ip, name, location
1.1.1.1, name0, location-1
1.1.1.1, name1, location-1
1.1.1.2, name2, location-2
1.1.1.2, name0, location-20
1.1.1.3, name0, location-3
1.1.1.3, name3, location-3
1.1.1.4, name4, location-4
1.1.1.4, name4b, location-4
1.1.1.5, name0, location-0
1.1.1.6, name0, location-0"
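Continuing from the makeresults sample above, one possible way to express that condition (a sketch, not necessarily what the thread settled on): keep name0 rows only when the IP has no other names.

``` trim any leading spaces left over from the CSV sample above ```
| foreach ip name location
    [| eval <<FIELD>> = trim('<<FIELD>>')]
``` count how many distinct names each IP has ```
| eventstats dc(name) AS name_count BY ip
``` drop name0 rows only where the IP also has other names ```
| where NOT (name="name0" AND name_count > 1)
| fields - name_count

Run against the sample data, this keeps the lone name0 rows for 1.1.1.5 and 1.1.1.6 and drops name0 everywhere else, matching the expected output.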
@sainag_splunk Not sure where I'd add this source code:

"query": "index=web \n| chart count over product_name by host",
"queryParameters": {
    "earliest": "$global_time.earliest$",
    "latest": "$global_time.latest$"
}
},

The current dashboard is using saved Reports, so I'd imagine we would be using ds.savedSearch.
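For reference, a rough sketch of where a fragment like that normally lives in the Dashboard Studio source (the data source IDs and the report name below are placeholders): an inline query sits in a ds.search data source, while a saved report is referenced with ds.savedSearch.

"dataSources": {
    "ds_inline_example": {
        "type": "ds.search",
        "options": {
            "query": "index=web \n| chart count over product_name by host",
            "queryParameters": {
                "earliest": "$global_time.earliest$",
                "latest": "$global_time.latest$"
            }
        }
    },
    "ds_saved_report_example": {
        "type": "ds.savedSearch",
        "options": {
            "ref": "My Scheduled Report"
        }
    }
}

The visualization then points at whichever data source ID you want via its dataSources.primary setting.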
Hello, I've recently upgraded to 9.3.0, and the file integrity check shows that /opt/splunk/bin/jp.py doesn't need to be installed, so we deleted it. However, the checker still complains about that file. Is there a way to clear/reset the checker?
When I create a timechart using Dashboard Studio, the visualization only partially loads until I click to open the visualization in a new window; then it loads as expected. We are on Splunk 9.0.5, but I don't see any known issues about this.
Yep, true. It's just once a week. It feels like we are using Splunk resources for the search without making use of the results, but it still works the way we wanted it to. I tried to run this condition separately in a subsearch to avoid running the entire search; it worked for a few days before it stopped working recently, and I'm not sure if a version upgrade or something else caused it.

[| makeresults
 | eval biweekly_cycle_start=1726977600, biweekly=round(((relative_time(now(),"@d")-biweekly_cycle_start)/86400),0)%14
 | where biweekly=0]

It would be smoother if there were a built-in way to do something similar.
Yes, that's one approach to the problem, but while it might not make a big difference for a simple, lightweight search, if your search is a big, heavy report you'd still be running it and stressing your servers. It's just that you wouldn't get any results back.
@prakashbhanu407 @woodcock This works too; maybe you can use it for a future requirement. I had a similar requirement, and I solved it using a combination of a cron schedule and a condition in the search query. It's just two steps: first set up a weekly schedule, then add a condition so results are returned only once every two weeks.

Set up a weekly cron schedule. For example, to run at 6 p.m. every Sunday, use:

0 18 * * 0

Add the following condition to your search query, placing it where the query runs efficiently without affecting the final output:

| eval biweekly_cycle_start=1726977600, biweekly=round(((relative_time(now(),"@d")-biweekly_cycle_start)/86400),0)%14
| where biweekly=0

In this example, I introduced a reference epoch time, biweekly_cycle_start, to anchor the two-week cycle. It represents the epoch time for two weeks before the alert schedule's starting date. For instance, if your schedule begins on October 6, 2024, use the epoch time for the start of the day on September 22, 2024, which is 1726977600. Each time the alert runs, the condition checks whether two weeks have passed since the reference date, so it returns results every two weeks and no results on the off week (seven days after the previous run). Insert this condition where it keeps the search efficient, before the final transforming commands like stats, top, table, etc.
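If it helps, here is a quick way to derive that reference epoch and check the cycle math from the search bar (a sketch only; strptime uses the search head's time zone, so the exact value may differ from 1726977600 depending on where your search head runs):

| makeresults
``` epoch for the start of the chosen reference day ```
| eval biweekly_cycle_start=strptime("2024-09-22 00:00:00", "%Y-%m-%d %H:%M:%S")
``` whole days elapsed since the reference day ```
| eval days_elapsed=round((relative_time(now(),"@d")-biweekly_cycle_start)/86400, 0)
``` 0 on "run" weeks, 7 on "off" weeks ```
| eval biweekly=days_elapsed%14
| table biweekly_cycle_start days_elapsed biweekly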
@kzkk You may have already found an alternative, but this could be useful in the future. I had a similar requirement, and I solved it using a combination of a cron schedule and a condition in the search query. It's just two steps: first set up a weekly schedule, then add a condition so results are returned only once every two weeks.

Set up a weekly cron schedule. For example, to run at 6 p.m. every Sunday, use:

0 18 * * 0

Add the following condition to your search query, placing it where the query runs efficiently without affecting the final output:

| eval biweekly_cycle_start=1726977600, biweekly=round(((relative_time(now(),"@d")-biweekly_cycle_start)/86400),0)%14
| where biweekly=0

In this example, I introduced a reference epoch time, biweekly_cycle_start, to anchor the two-week cycle. It represents the epoch time for two weeks before the alert schedule's starting date. For instance, if your schedule begins on October 6, 2024, use the epoch time for the start of the day on September 22, 2024, which is 1726977600. Each time the alert runs, the condition checks whether two weeks have passed since the reference date, so it returns results every two weeks and no results on the off week (seven days after the previous run). Insert this condition where it keeps the search efficient, before the final transforming commands like stats, top, table, etc.
@geninf5 @gcusello I had a similar requirement, and I solved it using a combination of a cron schedule and a condition in the search query. It's just two steps: first set up a weekly schedule, then add a condition so results are returned only once every two weeks.

Set up a weekly cron schedule. For example, to run at 6 p.m. every Sunday, use:

0 18 * * 0

Add the following condition to your search query, placing it where the query runs efficiently without affecting the final output:

| eval biweekly_cycle_start=1726977600, biweekly=round(((relative_time(now(),"@d")-biweekly_cycle_start)/86400),0)%14
| where biweekly=0

In this example, I introduced a reference epoch time, biweekly_cycle_start, to anchor the two-week cycle. It represents the epoch time for two weeks before the alert schedule's starting date. For instance, if your schedule begins on October 6, 2024, use the epoch time for the start of the day on September 22, 2024, which is 1726977600. Each time the alert runs, the condition checks whether two weeks have passed since the reference date, so it returns results every two weeks and no results on the off week (seven days after the previous run). Insert this condition where it keeps the search efficient, before the final transforming commands like stats, top, table, etc.
Note: in my testing this behavior is specific to bash on Linux.
Just to be clear, this is specifically for Splunk SOAR. I would like to delete unused tags on SOAR containers. I understand that I can go to Administration -> Administration Settings -> Tags and manually delete them, but we have thousands, and without manually checking each one I am not sure which are in use. I would like to be able to delete everything that is no longer in use on containers.
Please elaborate on "it isn't working".  That doesn't give us anything to work with.  Show us what you get so we can offer other suggestions. Use the eval command to add a field to the results table.
Ahh, I see what you mean. I never thought to use the comment like that, and several times. Thank you.
Thank you for the response! So, I tried it, and it isn't working, but for more context: I run a stats command for the table, and after that, I run a fillnull to insert the X's into the table.  I tried another stats after that, but that didn't work. How would I append the "Total_X's" field to the table?
The foreach command can do that.

<<your search>>
| eval Total_Xs = 0
| foreach * [| eval Total_Xs = Total_Xs + if('<<FIELD>>'="X", 1, 0)]
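To see it in action, here is a self-contained sketch that mimics the sample table from the question (the Field1-Field5 names are taken from that example):

``` build the sample rows ```
| makeresults format=csv data="Field1,Field2,Field3,Field4,Field5
X,X,Foo,Bar,X
Foo2,X,Foo,Bar,X
X,X,X,Bar,X"
| eval Total_Xs = 0
``` add 1 to the total for every field whose value is exactly X ```
| foreach Field* [| eval Total_Xs = Total_Xs + if('<<FIELD>>'="X", 1, 0)]
| table Field* Total_Xs

This should give Total_Xs values of 3, 2, and 4. Using Field* instead of * simply avoids touching internal fields such as _time.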
We identified the issue. Startdate is a timestamp_NTZ (no time zone), so it is treated as UTC. The config was set to the Eastern time zone; once it was adjusted, it worked perfectly. A simple misconfiguration, although it took a while to identify. Thanks for your input.
Missing data makes me immediately think of two things, and one is much easier to find and fix.

1) Bad timestamp ingestion

index=_introspection
| eval latency=abs(_indextime-_time)
| table _time _indextime latency
| sort - latency
| head 15

Try sorting both descending (-) and ascending (+); this will help point out anything that is being ingested with bad timestamp formatting, which makes the data appear to be missing.

2) Skipped events

You would need to dig through your HF's internal logging to check for full queues or max-transmit violations.

Give that a start.
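For the second check, one common starting point (assuming the heavy forwarder's _internal logs are forwarded to your indexers) is to look for blocked queues in metrics.log:

index=_internal source=*metrics.log* group=queue blocked=true
``` which hosts and which queues are filling up ```
| stats count BY host, name
| sort - count

Persistent hits here usually point at a downstream bottleneck or throughput limit rather than a search-time problem.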
Hello everyone,

I have a table (generated from stats) that has several columns, and some values of those columns are "X". I would like to count those X's and total them in the last column of the table. How would I go about doing that? Here is an example table, and thank you!

Field1 | Field2 | Field3 | Field4 | Field5 | Total_Xs
X | X | Foo | Bar | X | 3
Foo2 | X | Foo | Bar | X | 2
X | X | X | Bar | X | 4