All Posts


@R15  For monitoring stanzas, it's still pretty much the same. However, many new types of inputs exist now (modular, scripted, HEC, etc.) that do not rely on the fishbucket.
Hi @PickleRick
When I ran the following command, dc came back as 6 for every row:

| eventstats dc(name) as dc

dc ip      location    name
6  1.1.1.1 location-1  name0
6  1.1.1.1 location-1  name1
6  1.1.1.2 location-2  name2
6  1.1.1.2 location-20 name0
6  1.1.1.3 location-3  name0
6  1.1.1.3 location-3  name3
6  1.1.1.4 location-4  name4
6  1.1.1.4 location-4  name4b
6  1.1.1.5 location-0  name0
6  1.1.1.6 location-0  name0

So the output is still missing 1.1.1.5 and 1.1.1.6. Only the name0 rows whose IP also appears with other names should be removed. Thanks for your help.

| where name!="name0" OR (name=="name0" AND dc=1)

Output:
dc ip      location   name
6  1.1.1.1 location-1 name1
6  1.1.1.2 location-2 name2
6  1.1.1.3 location-3 name3
6  1.1.1.4 location-4 name4
6  1.1.1.4 location-4 name4b

Expected output:
ip      location   name
1.1.1.1 location-1 name1
1.1.1.2 location-2 name2
1.1.1.3 location-3 name3
1.1.1.4 location-4 name4
1.1.1.4 location-4 name4b
1.1.1.5 location-0 name0
1.1.1.6 location-0 name0
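A minimal runnable sketch that reproduces the expected output above, assuming the distinct count is meant per IP rather than across all events; the "by ip" clause is an assumption and isn't confirmed anywhere in this thread:

| makeresults format=csv data="ip,name,location
1.1.1.1,name0,location-1
1.1.1.1,name1,location-1
1.1.1.2,name2,location-2
1.1.1.2,name0,location-20
1.1.1.3,name0,location-3
1.1.1.3,name3,location-3
1.1.1.4,name4,location-4
1.1.1.4,name4b,location-4
1.1.1.5,name0,location-0
1.1.1.6,name0,location-0"
| eventstats dc(name) as dc by ip ``` distinct names per IP, not across all events ```
| where name!="name0" OR dc=1 ``` drop name0 only when its IP also has other names ```
| fields ip name location

With dc computed per IP, 1.1.1.5 and 1.1.1.6 get dc=1 and keep their name0 rows, while IPs that also have other names lose them.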
I had this same issue. I built an Ansible playbook that needed to run a Python script, and I got this error when running:
/opt/splunk/bin/python script.py
What fixed it:
/opt/splunk/bin/splunk cmd python script.py
Not sure if you're having the same problem, but for some reason the requests module doesn't load or handle SSL correctly if you use /opt/splunk/bin/python, but it DOES work correctly if you use /opt/splunk/bin/splunk cmd python. Hope it helps!
This ended up solving a major problem for me when using Ansible to set up Splunk and run Python scripts inside the Ansible playbook. Thank you so much!
This list has aged quite a bit; is it still accurate? @yannK
You can help yourself by checking how many distinct values there are in the name field.
| eventstats dc(name) as dc
| where name!="name0" OR (name=="name0" AND dc=1)
Then you can | dedup name if needed.
Hi @PickleRick
Sorry, I missed another condition. I also updated the initial post. The name0 rows are not in any particular order. The dedup/filter should not be applied to IPs that don't have "name0", AND it should not be applied to a unique IP whose only name is "name0". So unique IPs like 1.1.1.5 and 1.1.1.6 that only have "name0" need to remain in the data.
What I did for now is filter them out statically, but another IP could show up with the same pattern.
Thank you again for your help.

Data:
ip      name   location
1.1.1.1 name0  location-1
1.1.1.1 name1  location-1
1.1.1.2 name2  location-2
1.1.1.2 name0  location-20
1.1.1.3 name0  location-3
1.1.1.3 name3  location-3
1.1.1.4 name4  location-4
1.1.1.4 name4b location-4
1.1.1.5 name0  location-0
1.1.1.6 name0  location-0

Expected output:
ip      name   location
1.1.1.1 name1  location-1
1.1.1.2 name2  location-2
1.1.1.3 name3  location-3
1.1.1.4 name4  location-4
1.1.1.4 name4b location-4
1.1.1.5 name0  location-0
1.1.1.6 name0  location-0

| makeresults format=csv data="ip, name, location
1.1.1.1, name0, location-1
1.1.1.1, name1, location-1
1.1.1.2, name2, location-2
1.1.1.2, name0, location-20
1.1.1.3, name0, location-3
1.1.1.3, name3, location-3
1.1.1.4, name4, location-4
1.1.1.4, name4b, location-4
1.1.1.5, name0, location-0
1.1.1.6, name0, location-0"
Then just filter out all events with name="name0"
| where name!="name0"
or even
| search name!="name0"
Then you can dedup if needed.
Hi @PickleRick
Thank you for your help. I also updated the original post. The name0 rows are not in any particular order. The dedup/filter should not be applied to IPs that don't have "name0".

Data:
ip      name   location
1.1.1.1 name0  location-1
1.1.1.1 name1  location-1
1.1.1.2 name2  location-2
1.1.1.2 name0  location-20
1.1.1.3 name0  location-3
1.1.1.3 name3  location-3
1.1.1.4 name4  location-4
1.1.1.4 name4b location-4

Expected output:
ip      name   location
1.1.1.1 name1  location-1
1.1.1.2 name2  location-2
1.1.1.3 name3  location-3
1.1.1.4 name4  location-4
1.1.1.4 name4b location-4

| makeresults format=csv data="ip, name, location
1.1.1.1, name0, location-1
1.1.1.1, name1, location-1
1.1.1.2, name2, location-2
1.1.1.2, name0, location-20
1.1.1.3, name0, location-3
1.1.1.3, name3, location-3
1.1.1.4, name4, location-4
1.1.1.4, name4b, location-4"
Hello, my deployment server shows 11 errors; however, the query doesn't return any results even with All Time selected. Where would I go from here?
Actually, that is an interesting train of thought. You could use this to conditionally create a set of "easily not-fulfillable" conditions, like some non-existent sourcetype being applied only on those days you don't want the search to run.
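A minimal sketch of that idea, assuming a weekly case where results should only come back on Sundays; the index name, the __never_matches__ sourcetype, and the day-of-week check are all illustrative and not taken from this thread:

index=your_index
    [| makeresults
     | eval dow=strftime(now(), "%w")
     | eval cond=if(dow="0", "*", "sourcetype=__never_matches__")
     | return $cond]
| stats count by host

Because the subsearch's value is returned literally, on Sunday the base search is only constrained by *, while on other days it is constrained to a sourcetype that doesn't exist, so it should match nothing and the scheduled search returns no events.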
This looks like a simple
| dedup ip
If there is some other logic involved, please explain.
How do I dedup or filter out data with a condition? For example, below I want to filter out rows that contain name="name0". The condition should be able to handle any IPs in the ip field because the IPs can change; in the real data there are a lot more IPs. The name0 rows are not in any particular order. The dedup/filter should not be applied to IPs that don't have "name0", AND it should not be applied to a unique IP whose only name is "name0". Thank you for your help.

Data:
ip      name   location
1.1.1.1 name0  location-1
1.1.1.1 name1  location-1
1.1.1.2 name2  location-2
1.1.1.2 name0  location-20
1.1.1.3 name0  location-3
1.1.1.3 name3  location-3
1.1.1.4 name4  location-4
1.1.1.4 name4b location-4
1.1.1.5 name0  location-0
1.1.1.6 name0  location-0

Expected output:
ip      name   location
1.1.1.1 name1  location-1
1.1.1.2 name2  location-2
1.1.1.3 name3  location-3
1.1.1.4 name4  location-4
1.1.1.4 name4b location-4
1.1.1.5 name0  location-0
1.1.1.6 name0  location-0

| makeresults format=csv data="ip, name, location
1.1.1.1, name0, location-1
1.1.1.1, name1, location-1
1.1.1.2, name2, location-2
1.1.1.2, name0, location-20
1.1.1.3, name0, location-3
1.1.1.3, name3, location-3
1.1.1.4, name4, location-4
1.1.1.4, name4b, location-4
1.1.1.5, name0, location-0
1.1.1.6, name0, location-0"
@sainag_splunk  Not sure where I'd add this source code:

"query": "index=web \n| chart count over product_name by host",
"queryParameters": {
    "earliest": "$global_time.earliest$",
    "latest": "$global_time.latest$"
}
},

The current dashboard is using saved Reports, so I'd imagine we'd be using ds.savedSearch.
Hello, I've recently upgraded to 9.3.0 and the file integrity check shows that /opt/splunk/bin/jp.py doesn't need to be installed, so we deleted it. However, the checker still complains about that file. Is there a way to clear/reset the checker?
When I create a timechart using Dashboard Studio, the visualization only partially loads until I click to open the visualization in a new window; then it loads as expected. We are on Splunk 9.0.5, but I don't see any known issues about this.
Yep, true. It's just once a week. It feels like we are using Splunk resources for the search without making use of the results, but it still works the way we wanted it to work.
I tried to run this condition separately in a subsearch to avoid running the entire search. It worked for a few days before it stopped working recently; not sure if a version upgrade or something else caused it.
[| makeresults
 | eval biweekly_cycle_start=1726977600, biweekly=round(((relative_time(now(),"@d")-biweekly_cycle_start)/86400),0)%14
 | where biweekly=0]
It would be smooth if there were a way similar to this.
Yes, that's one approach to the problem, but while it might not make a big difference for a simple and lightweight search, if your search is a big and heavy report you'd still be running it and stressing your servers. It's just that you wouldn't get any results back.
@prakashbhanu407 @woodcock  This works too; maybe you can use it for a future requirement. I had a similar requirement, and I solved it using a combination of a cron schedule and a condition in the search query. It's just two steps: first set up a weekly schedule, then add a condition that returns results only once every two weeks.
Set up a weekly cron schedule. For example, to run at 6 p.m. every Sunday, use:
0 18 * * 0
Add the following condition to your search query, placing it where the query runs efficiently without affecting the final output:
| eval biweekly_cycle_start=1726977600, biweekly=round(((relative_time(now(),"@d")-biweekly_cycle_start)/86400),0)%14
| where biweekly=0
In this example, I introduced a reference epoch time, biweekly_cycle_start, to calculate the two-week cycle. It represents the epoch time for two weeks before the alert schedule's starting date. For instance, if your schedule begins on October 6, 2024, use the epoch time for the start of the day on September 22, 2024, which is 1726977600. Each time the alert runs, the condition checks whether two weeks have passed since the last run. It returns results every two weeks and no results on the off week (seven days from the previous run). Simply insert this condition where it will optimize the search performance, before the final transforming commands like stats, top, table, etc.
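A hedged sketch of how this might look end to end in the alert's search, reusing the epoch and condition from the steps above; index=your_index and the final stats line are illustrative placeholders:

index=your_index
| eval biweekly_cycle_start=1726977600
| eval biweekly=round(((relative_time(now(),"@d")-biweekly_cycle_start)/86400),0)%14
| where biweekly=0 ``` on the off week this drops every event, so the alert has nothing to fire on ```
| stats count by host

The alert itself stays on the weekly 0 18 * * 0 cron; the where clause is what turns every other run into an empty result set.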
@kzkk  You may have found an alternate way already, but this might be useful in the future. I had a similar requirement, and I solved it using a combination of a cron schedule and a condition in the search query. It's just two steps: first set up a weekly schedule, then add a condition that returns results only once every two weeks.
Set up a weekly cron schedule. For example, to run at 6 p.m. every Sunday, use:
0 18 * * 0
Add the following condition to your search query, placing it where the query runs efficiently without affecting the final output:
| eval biweekly_cycle_start=1726977600, biweekly=round(((relative_time(now(),"@d")-biweekly_cycle_start)/86400),0)%14
| where biweekly=0
In this example, I introduced a reference epoch time, biweekly_cycle_start, to calculate the two-week cycle. It represents the epoch time for two weeks before the alert schedule's starting date. For instance, if your schedule begins on October 6, 2024, use the epoch time for the start of the day on September 22, 2024, which is 1726977600. Each time the alert runs, the condition checks whether two weeks have passed since the last run. It returns results every two weeks and no results on the off week (seven days from the previous run). Simply insert this condition where it will optimize the search performance, before the final transforming commands like stats, top, table, etc.