All Posts

From my research, you need 3 values and your table only has 2. Heat maps represent the change of a value over an interval of time, by category:

| timechart span=<interval> max(<value>) as perc by <field_name>
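To make the template above concrete, here is a minimal, hedged sketch; the index, span, value field, and split-by field are illustrative assumptions, not taken from the original post:

index=web_perf ```hypothetical index```
| timechart span=1h max(response_time) as perc by host ```span, value field, and split-by field are illustrative```

This produces one column per host with the hourly maximum, which a heat-map visualization can then colour by value over time.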
We recently upgraded our add-on to use TLS 1.2 and Python 3 by following this blog post: Link  After upgrading, the Splunk server asks for a restart during first-time installation. Earlier it never asked for a restart on a first-time installation; it only asked when we upgraded the app. Also, I'm not using an inputs.conf file in my add-on.
Hi @Punnu  If you want a count of the unique messageID values after filtering, then a simple stats count should do, as we have already done a stats by messageID (a quick sketch of the full pipeline is included below):

| stats count

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
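A minimal sketch of that pipeline, assuming the earlier search grouped by messageID; the index, filter condition, and threshold are illustrative placeholders, not from the original thread:

index=app_logs ```hypothetical index```
| stats count by messageID
| where count > 1 ```stand-in for whatever filtering was applied```
| stats count

The final stats count returns the number of messageID values that survived the filter, i.e. the count of unique message IDs.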
Hi @karn  I would have said clone the input, but you've done that? So the name of the input is definitely different from the original? The reason I ask is that, looking at the code, the checkpoint name is created based on the input name.

The two checkpoint files that the Generic S3 input creates (key/index ckpt) are stored in the checkpoint directory (typically $SPLUNK_HOME/var/lib/splunk/modinputs/aws_s3) - can you check to see what you have in there? You could stop Splunk, clear the relevant modinput checkpoint files, and then start Splunk again.

If that doesn't work, check your _internal logs for any errors, or for more info about which data it is pulling in when the input runs (a hedged example search is included below).

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
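As a hedged starting point for that _internal check - the source pattern is an assumption, since the add-on's log file names vary by version:

index=_internal source=*aws* ("ERROR" OR "WARN") ```source filter is an assumption; narrow it to the add-on's actual log files```
| stats count by source

Any errors mentioning the S3 input or its checkpoint should show up here around the time the input runs.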
I changed the interval time to 600s as well.
Hi @tech_g706  Those searches are accelerating the Data Models; presumably you are using Splunk Enterprise Security?

I think the first thing to check is whether you are actually using all of those models for your ES rules/searches.

Secondly, I would check that you have set the specific required indexes in the allowed index list for each of your Data Models in the "CIM Setup" section of ES. By default these are set to index=* but they should be configured to only access the indexes that contain the relevant data for the particular data model (a hedged check is sketched below). Check out these docs for more information on managing data models in ES.

The last thing I would check is the Data Model Audit dashboard (Audit > Data Model Audit) in ES; this should give you some stats on how the DMs are behaving and whether they are updating correctly.

You can also check out https://docs.splunk.com/Documentation/ES/8.0.2/Install/ConfigureDatamodels#Data_model_acceleration_rebuild_behavior which has some further details on the configuration options, such as the summary period for each data model.

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
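As a quick, hedged way to see which indexes are actually feeding an accelerated model (the data model name is just an example; run it once per model you care about):

| tstats summariesonly=true count from datamodel=Authentication by index ```datamodel name is illustrative```

If indexes appear that you did not expect, tightening the allowed index list in CIM Setup should shrink the acceleration workload.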
I had the generic S3 input of the AWS add-on disabled for a year. After I enabled it, it ingested old data, so I disabled it again and changed the initial date in inputs.conf. After restarting the Splunk service, I enabled it, but no data is coming in. I also tried cloning the input and changing only the name; still no data. I don't know how to check this. Is it a checkpoint issue or something else? Please help me check it. Thanks in advance.
Hi @tech_g706 , it isn't possible to optimize accelerated scheduled searches; you can only reduce the execution frequency, if this is compatible with your requirements. E.g. if you schedule acceleration searches every 10 or 15 minutes instead of 5, data will be available later, so you must shift the execution time window of your Correlation Searches. In other words, with a frequency of 5 minutes you can use a time period from -10m@m to -5m@m; with a frequency of 15 minutes you must schedule Correlation Searches from -20m@m to -15m@m (a hedged example follows). Is this acceptable for you? Otherwise, you have to use summariesonly=false, but you lose performance. Ciao. Giuseppe
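A hedged illustration of that shifted window - the data model and the split-by field are placeholders, not from the original post:

| tstats summariesonly=true count from datamodel=Authentication where earliest=-20m@m latest=-15m@m by Authentication.user ```datamodel and field are illustrative```

The window ends 15 minutes in the past so that the correlation search only reads spans the acceleration job has already summarized.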
We have security logs coming to Splunk using a data input configured in Splunk. The logs have a field called security configuration ID; the IDs are unique and each config ID belongs to one app (sometimes two or three belong to one app). There are approximately 200 config IDs, and they want to restrict users from seeing logs for other config IDs. So they are asking us to create 200 indexes, with the config ID in the index name, and restrict access based on that. But to my knowledge, having more indexes is not a good idea; it needs more maintenance and so on.

So what I am thinking is: while configuring the data input, I can name it with the config ID so that the ID appears in the source field, and use a single index for all of them. When creating a role, I will assign that index and, in the restrictions, give a search filter specific to the individual user. My question is: will this work as expected? If anyone is already following this approach, please confirm. Even if we restrict a user with the common index=X and source=123456 (config ID) and save it, if he specifies the index directly in his search, can he still see all config ID logs or only the 123456 logs? Please confirm. If there is any other alternative idea, please help me.
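As a hedged sketch of how the proposed restriction behaves (the index name and config ID are illustrative), a role's search filter is an SPL fragment that gets ANDed onto every search a member of that role runs, for example:

source="123456" ```illustrative config ID, set as the role's search filter```

so when the restricted user types:

index=X ```the shared index```

the search that actually executes is effectively index=X source="123456", and the user only sees events for that config ID even though the index is shared.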
To offer a more precise answer to this question for anyone referencing it in the future:

First, download the Splunk package to your local system. Next, open the command prompt, PowerShell, or any terminal you prefer. Use the 'cd' command to navigate to the directory where the Splunk package is located, for example:

cd .\Downloads\

Once in the correct directory, compute the checksum.

For Linux and macOS:

sha256sum splunk*.tgz

For Windows (PowerShell):

Get-FileHash -Algorithm SHA256 splunk*.tgz
Hi, I am seeking recommendations on optimizing the most resource-intensive saved searches in my Splunk Cloud instance to reduce indexer CPU utilization, which is consistently at 99%. We are using the Splunk ES and SA-NetworkProtection apps. According to CMC, these are the most expensive ones, taking around 30-40 minutes to complete:

_ACCELERATE_DM_Splunk_SA_CIM_Authentication_ACCELERATE_
_ACCELERATE_DM_Splunk_SA_CIM_Network_Traffic_ACCELERATE_
_ACCELERATE_DM_Splunk_SA_CIM_Vulnerabilities_ACCELERATE_
_ACCELERATE_DM_Splunk_SA_CIM_Endpoint.Services_ACCELERATE
_ACCELERATE_DM_Splunk_SA_CIM_Network_Sessions_ACCELERATE_
_ACCELERATE_DM_Splunk_SA_CIM_Change_ACCELERATE_
_ACCELERATE_DM_SA-NetworkProtection_Domain_Analysis_ACCELERATE_
_ACCELERATE_DM_DA-ESS-ThreatIntelligence_Threat_Intelligence_ACCELERATE_

Any recommendations on how I can optimize without disabling them? Thank you
I'd also like to know. Other tools have this functionality. The existing Splunk implementation with JavaScript is really complicated.
@ITWhisperer  Can you please help me with this topic?
Hi @livehybrid  Thank you for the reply. I would like to ask one more question: after filtering out the records, how can we find the count of messageID?
Assuming your search is already using a time input to set the time frame, the search can override this as shown below. The addinfo command exposes info_min_time, the earliest time of the selected range, so the subsearch derives earliest and latest from the time input rather than from now():

index = events_prod_cdp_penalty_esa source="SYSLOG" sourcetype=zOS-SYSLOG-Console
    ( TERM(VVF006H) OR TERM(VVF003H) OR TERM(VVZJ1BH) OR TERM(VVZJ1CH) OR TERM(VVZJ1DH) OR TERM(VVZJ1EH) OR TERM(HVVZK3A) )
    ("- ENDED" OR "- STARTED" OR "ENDED - ABEND")
    [| makeresults
     | addinfo
     | eval earliest=relative_time(info_min_time,"-17h@d+17h")
     | eval latest=relative_time(earliest,"+24h")
     | table earliest latest]
Hi @msatish  Yes - a service account can be used in the same way as any other user. In fact, I always recommend that knowledge objects *should* be owned by a service account, because if they are owned by a user who leaves the organisation, the knowledge objects could become orphaned - or they could accidentally be deleted.

If using SAML (for example) with Authentication Extensions enabled, users will be automatically updated based on groups/roles in the Identity Provider - so if someone leaves, their account will be deleted, and if they move teams they may have more or fewer permissions than they used to (a hedged search for spotting personally-owned objects is included below).

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
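As a hedged sketch for auditing this (the svc_ prefix is an illustrative naming convention, and saved searches are just one knowledge object type), something like this lists owners so you can spot objects still tied to personal accounts:

| rest /servicesNS/-/-/saved/searches count=0 splunk_server=local
| where NOT like('eai:acl.owner', "svc_%") ```the svc_ prefix is an assumed service-account naming convention```
| table title eai:acl.app eai:acl.owner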
@ITWhisperer  Thanks, it's working fine when we are analyzing the current day (yesterday 5 PM to today 5 PM). Is it possible to replace now() with the time provided by the time input panel? i.e.
- if I select today in the time input panel, it should consider the start of day as 5 PM of today
- if I select yesterday in the time input panel, it should consider the start of day as 5 PM of yesterday and the end of day as 5 PM of today
- if I select 31/03/2025 in the time input panel, it should consider the start of day as 5 PM of 31/03/2025 and the end of day as 5 PM of 01/04/2025
Can a service account be used as the owner of knowledge objects (saved searches, transforms-lookups, props-extracts, macros, and views)? Please share the pros and cons.
Hi @Praz_123  You could try a REST call:

| rest /services/cluster/manager/health

This returns a number of interesting fields around SF/RF. Returned values:
- all_data_is_searchable (Boolean): Indicates if all data in the cluster is searchable.
- all_peers_are_up (Boolean): Indicates if all peers are strictly in the Up status.
- cm_version_is_compatible (Boolean): Indicates if any cluster peers are running a Splunk Enterprise version greater than or equal to the cluster manager's version.
- multisite (Boolean): Indicates if multisite is enabled.
- no_fixups_in_progress (Boolean): Indicates if there are no buckets with bucket state NonStreamingTarget, or bucket search states PendingSearchable or SearchablePendingMask.
- pre_flight_check (Boolean): Indicates if the health check prior to a rolling upgrade was successful. This value is true only if the cluster passed all health checks.
- replication_factor_met (Boolean): Only valid for mode=manager and multisite=false. Indicates whether the replication factor is met. If true, the cluster has at least replication_factor number of raw data copies in the cluster.
- search_factor_met (Boolean): Only valid for mode=manager and multisite=false. Indicates whether the search factor is met. If true, the cluster has at least search_factor number of raw data copies in the cluster.
- site_replication_factor_met (Boolean): Only valid for mode=manager and multisite=true. Indicates whether the site replication factor is met. If true, the cluster has at least replication_factor number of raw data copies in the cluster.
- site_search_factor_met (Boolean): Only valid for mode=manager and multisite=true. Indicates whether the site search factor is met. If true, the cluster has at least site_search_factor number of raw data copies in the cluster.
- splunk_version_peer_count (String): Lists the number of cluster peers running each Splunk Enterprise version.

Check out the docs at https://docs.splunk.com/Documentation/Splunk/9.4.1/RESTREF/RESTcluster#cluster.2Fmanager.2Fhealth for more info on all the fields.

You could also check:

| rest /services/cluster/manager/info

- active_bundle: Provides information about the active bundle for this manager.
- bundle_creation_time_on_manager: The time, in epoch seconds, when the bundle was created on the manager.
- bundle_validation_errors_on_manager: A list of bundle validation errors.
- bundle_validation_in_progress: Indicates if bundle validation is in progress.
- bundle_validation_on_manager_succeeded: Indicates whether the manager succeeded in validating bundles.
- data_safety_buckets_to_fix: Lists the buckets to fix for the completion of data safety.
- gen_commit_buckets_to_fix: The buckets to be fixed before the next generation can be committed.
- indexing_ready_flag: Indicates if the cluster is ready for indexing.
- initialized_flag: Indicates if the cluster is initialized.
- label: The name for the manager, displayed in the Splunk Web manager page.
- latest_bundle: The most recent information reflecting any changes made to the manager-apps configuration bundle. In steady state, this is equal to active_bundle. If it is not equal, then pushing the latest bundle to all peers is in process (or needs to be started).
- maintenance_mode: Indicates if the cluster is in maintenance mode.
- reload_bundle_issued: Indicates if the bundle issued is being reloaded.
- rep_count_buckets_to_fix: Number of buckets to fix on peers.
- rolling_restart_flag: Indicates whether the manager is restarting the peers in a cluster.
- search_count_buckets_to_fix: Number of buckets to fix to satisfy the search count.
- service_ready_flag: Indicates whether the manager is ready to begin servicing, based on whether it is initialized.
- start_time: Timestamp corresponding to the creation of the manager.

If you want specific fix-up info, check out https://docs.splunk.com/Documentation/Splunk/9.4.1/RESTREF/RESTcluster#cluster.2Fmanager.2Ffixup

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
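As a quick, hedged usage example of the first call (the field selection is just a subset of the values above):

| rest /services/cluster/manager/health ```add splunk_server=<your_cluster_manager> if running from a search head rather than the manager```
| table replication_factor_met search_factor_met site_replication_factor_met site_search_factor_met all_data_is_searchable all_peers_are_up

This gives a one-row summary of whether SF/RF are currently met.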
Since you only want to consider your day to start at the previous 5pm, you could try adjusting your search earliest time appropriately:

index = events_prod_cdp_penalty_esa source="SYSLOG" sourcetype=zOS-SYSLOG-Console
    ( TERM(VVF006H) OR TERM(VVF003H) OR TERM(VVZJ1BH) OR TERM(VVZJ1CH) OR TERM(VVZJ1DH) OR TERM(VVZJ1EH) OR TERM(HVVZK3A) )
    ("- ENDED" OR "- STARTED" OR "ENDED - ABEND")
    [| makeresults
     | eval earliest=relative_time(now(),"-17h@d+17h")]
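A hedged worked example of how "-17h@d+17h" lands on the previous 5 PM (the timestamp is illustrative):

| makeresults
| eval example_now=strptime("2025-04-01 10:00", "%Y-%m-%d %H:%M") ```illustrative "now"```
| eval boundary=relative_time(example_now, "-17h@d+17h")
| eval boundary_readable=strftime(boundary, "%Y-%m-%d %H:%M") ```subtracting 17h gives 2025-03-31 17:00, @d snaps to midnight, +17h lands on 2025-03-31 17:00```

So at 10:00 on 1 April the boundary is 5 PM on 31 March, and once the clock passes 5 PM the boundary rolls forward to 5 PM of the current day.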