All Posts


Are you using the default acceleration parameters? If so, can you try setting Max Concurrent Searches to 4 (instead of 3) and Max Summarization Search Time to 15 minutes (instead of 60), and lowering the backfill range (if you are sure there are no major historical events you need to take care of)? I faced a similar issue with a large (40 TB+/day) customer and had to tweak those parameters for Network_Traffic and a couple of other data models. Reference doc: https://help.splunk.com/en/splunk-enterprise/manage-knowledge-objects/knowledge-management-manual/9.3/use-data-summaries-to-accelerate-searches/accelerate-data-models#ariaid-title10
Also, if possible, can you share your datamodels.conf for the Authentication DM?
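For reference, those UI settings map to datamodels.conf keys. A minimal sketch of what the tweak might look like; the stanza name and values below are illustrative, not anyone's actual config:

[Network_Traffic]
acceleration = 1
# Up from the default of 3 concurrent acceleration searches
acceleration.max_concurrent = 4
# 15 minutes (900 s) per summarization run, down from the default 3600 s
acceleration.max_time = 900
# Shorter backfill window; adjust to how much history you actually need
acceleration.backfill_time = -7d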
@gcusello I was wondering: if your summary range is 2 days, why do your earliest time and latest time have a gap of around 17 months? Also, can you run this and check if it is also slow:
| tstats summariesonly=true count from datamodel=Authentication by _time span=1h
Unable to update and save detections after upgrading to Splunk ES version 8.1.0. It says Detection ID is missing.   
Hi @PrewinThomas, thank you for your support: I've been breaking my head over this for too long! The Data Model Audit dashboard doesn't give any additional information beyond the fact that all the enabled accelerations have very high run_time values. About the acceleration summary range: I enabled only two days; in fact the DM dimensions are very small. About DM constraints: I used the related macros to search only on the relevant indexes, but there are many of them; e.g. in the Authentication DM there are more than 30 indexes. About high-cardinality fields: I have many of them (such as user, src, dest, etc.), but in the Authentication DM they are relevant and always present, so I cannot remove them. I also optimized the scheduling. I suppose I should look into the acceleration parameters, but so far without luck! Ciao. Giuseppe
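One way to quantify those run_time values is to look at the scheduler logs for the acceleration jobs themselves. A hedged sketch; the savedsearch_name wildcard is an assumption based on the usual _ACCELERATE_ naming of DM acceleration searches:

index=_internal sourcetype=scheduler savedsearch_name="_ACCELERATE_*"
| stats avg(run_time) AS avg_run_time max(run_time) AS max_run_time count BY savedsearch_name
| sort - avg_run_time

This shows which data model's acceleration is the slow one and whether the 2000-second runs are constant or spiky.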
Hi, can anybody help with this problem, please? Old Splunk 4 is running on Windows Server 2016. The old Splunk should be upgraded to the newest version on new hardware with Windows Server 2022.
1. How to do it?
2. How to migrate all the data?
3. How to use the existing licence?
Sorry, my mistake. The old version is 7.1.2.
@gcusello Your resources look pretty good. Can you check whether your DM search constraints are too broad, and whether too large an acceleration summary range is enabled? Are there too many high-cardinality fields in the DM? Also, can you check whether the Data Model Audit dashboard provides any further details on this?
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
Hi all, I have an issue with Data Model accelerations: the run times of the accelerations are too high to use the DMs in my correlation searches: more than 2000 seconds per run. I have six IDXs with 24 CPUs (only partially used: less than 50%) and storage with 1500 IOPS, so the infrastructure shouldn't be the issue. Six indexers should be sufficient to index and search 1 TB/day of data, so that shouldn't be the issue either. I have around 1 TB/day of data distributed across more than 30 indexes, and I listed these indexes in the CIM macro, so that shouldn't be the issue. Where could I look for the issue? For now I'm trying some parameters: I enabled "Poll Buckets For Data To Summarize" and I disabled "Automatic Rebuilds". Is there something else in the DM structure that could be critical? Thank you for your help. Ciao. Giuseppe
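For anyone following along, those two UI toggles correspond to datamodels.conf keys. A minimal sketch, assuming an Authentication stanza; the actual stanza and app context may differ:

[Authentication]
acceleration = 1
# "Poll Buckets For Data To Summarize" in the UI
acceleration.poll_buckets_until_maxtime = true
# Disabling "Automatic Rebuilds" in the UI makes rebuilds manual
acceleration.manual_rebuilds = true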
Hi @PotatoDataUser Unfortunately "Add a comment" does not support field token replacement. See the docs at https://help.splunk.com/en/splunk-it-service-intelligence/splunk-it-service-intelligence/detect-and-act-on-notable-events/4.20/event-aggregation/configure-episode-action-rules-in-itsi#:~:text=Does%20not%20accept%20token%20replacement. for more details.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I have set up an episode review that is capturing alerts and generating episodes. Now I want to know if I can add comments to the episode based on conditions. For example, splunk-system-user should check if the status becomes pending and add a comment: "The details for this are - (fieldvalue)". For example, if I have a field named "Version", I want the system to add a comment like: "The details for this are : 1.2.3". I tried adding this in the rules, but when I check the comments I see them like this (screenshot not shown). Please let me know if you know of any way I can add a field value in the comments. Thanks in advance.
When I use the btool command that you provided me with, what exactly do I look for? There is an overwhelming amount of information in that btool output. I can see my peers (indexers) in the Peers tab on the Indexer Clustering page on my cluster manager. And I have triple-checked that I am on the cluster manager; I've often made the mistake of looking at other hosts hahaha
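If it helps, you can narrow the btool output to just the stanza and paths you care about. A hedged sketch, assuming the index is named bmc as in the earlier reply:

splunk btool indexes list bmc --debug | grep -Ei 'bmc|homePath|coldPath|thawedPath'

The --debug flag prefixes each line with the file that set it, which tells you whether the stanza actually arrived via the cluster bundle (on a peer it should resolve under slave-apps or peer-apps) or is missing entirely.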
@Andre_ I can see the option to enter the Output Name with DB Connect version 4. There might be a bug/UI issue with your particular 3.x version, not sure. I also saw an option of directly editing savedsearches.conf, which I haven't tested. You can try this if you can't upgrade to 4. After saving your alert, add the entries below to your .conf with your DB output name:
action.db_output = 1
action.db_output.param.output = output_to_test_table
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
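Put together, the savedsearches.conf stanza might look something like this. A sketch only; the stanza name, search, and schedule are placeholders, and output_to_test_table stands in for whichever DBX output you configured:

[My DBX Alert]
search = index=main sourcetype=my_sourcetype | stats count BY host
cron_schedule = */15 * * * *
enableSched = 1
action.db_output = 1
action.db_output.param.output = output_to_test_table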
Latest 3.x, haven’t updated to 4.0.0 yet (not a fan of 0s)
Your DB Connect version?
The document you linked states, in step 5 for creating an alert: "Enter the Output Name. The output name must exist in DB Connect." I have no option to enter the output name. It says no parameters are required.
Hi, yes, everything is set up and works well when used manually. I can use SPL to update the database table. I am unable to use the DB Connect alert action. I have 3 outputs configured in DBX. Now I am setting up an alert and choosing the DB Connect alert action. It's not working, and in my mind it can't, because I have no way to tell it which output to use. If someone has a DBX alert configured and could share the config, that might clear up my confusion. Kind regards, Andre
@ASGrover Can you check the bundle deployment status on the CM:
splunk show cluster-bundle-status
Verify your indexes.conf is placed correctly, e.g.:
$SPLUNK_HOME/etc/master-apps/<your_app>/local/indexes.conf
Verify the index config is available on the indexers: run this on one of the indexers and check the output:
splunk btool indexes list bmc --debug
Does your new index have any data? If not, try with some test data:
| makeresults | eval foo="bar" | collect index=bmc
Also, did you find any errors in the CM's _internal index? Lastly, perform a restart on the CM as well. A sketch of the indexes.conf stanza follows below.
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
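A minimal sketch of what that master-apps indexes.conf might contain, assuming the index is named bmc; the paths and retention value are illustrative:

# $SPLUNK_HOME/etc/master-apps/<your_app>/local/indexes.conf
[bmc]
homePath   = $SPLUNK_DB/bmc/db
coldPath   = $SPLUNK_DB/bmc/colddb
thawedPath = $SPLUNK_DB/bmc/thaweddb
# Optional retention, roughly 90 days
frozenTimePeriodInSecs = 7776000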
Hi @CyberSamurai, you have two solutions. You can create a lookup (called e.g. perimeter.csv and containing two columns, host and sourcetype) listing all the sourcetypes and hosts to monitor (beware: in the lookup you have to list every couple of sourcetype and host to monitor), and then run a search like this:
index=sw tag=MemberServers sourcetype="windows PFirewall Log"
| stats count BY sourcetype host
| append [ | inputlookup perimeter.csv | eval count=0 | fields sourcetype host count ]
| stats sum(count) AS total BY sourcetype host
| where total=0
Otherwise, if you don't want to manage a lookup, you could check the couples of sourcetype and host that were present e.g. in the last 30 days but aren't present in the last hour, running a search like this:
index=sw tag=MemberServers sourcetype="windows PFirewall Log"
| stats latest(_time) AS _time count BY sourcetype host
| where _time<now()-3600
Obviously, customize these for your situation. A sample of the lookup follows below. Ciao. Giuseppe
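For illustration, perimeter.csv might look like this; the hostnames are made up:

host,sourcetype
server01,windows PFirewall Log
server02,windows PFirewall Log
dc01,windows PFirewall Log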
@Andre_ Did you create database outputs first? The alert action does not prompt for parameters because it uses the mapping and connection you set up in the DB Connect app's Outputs. See https://help.splunk.com/en/splunk-cloud-platform/connect-relational-databases/deploy-and-use-splunk-db-connect/3.18/configure-and-manage-splunk-db-connect/create-and-manage-database-outputs#id_8af48766_8b49_4f27_8138_a2cdf208e86c__Create_a_database_output
If you want to test it manually, use this in your SPL:
| dbxoutput output="output_to_test_table"
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
@CyberSamurai Try with a lookup, e.g.:
| inputlookup memberservers.csv
| rename host as lookup_host
| join type=left lookup_host
    [ | tstats count as totalEvents count(eval(sourcetype=="windows PFirewall Log")) as fwCount WHERE index=sw tag=MemberServers BY host
      | rename host as lookup_host ]
| fillnull value=0 totalEvents fwCount
| where fwCount=0
| table lookup_host totalEvents fwCount
| rename lookup_host as host
Also try with tstats alone:
| tstats count as totalEvents count(eval(sourcetype=="windows PFirewall Log")) as fwCount WHERE index=sw tag=MemberServers BY host
| where fwCount=0
| table host, totalEvents, fwCount
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!