All Posts


@ITWhisperer Can you please help me with this topic?
hi @livehybrid Thank you for the reply. I would like to ask one more question: after filtering out records, how can we find the count of messageID?
Assuming your search is already using time input to set the time frame, the search can override this as shown below:

index=events_prod_cdp_penalty_esa source="SYSLOG" sourcetype=zOS-SYSLOG-Console
    ( TERM(VVF006H) OR TERM(VVF003H) OR TERM(VVZJ1BH) OR TERM(VVZJ1CH) OR TERM(VVZJ1DH) OR TERM(VVZJ1EH) OR TERM(HVVZK3A) )
    ("- ENDED" OR "- STARTED" OR "ENDED - ABEND")
    [| makeresults
     | addinfo
     | eval earliest=relative_time(info_min_time,"-17h@d+17h")
     | eval latest=relative_time(earliest,"+24h")
     | table earliest latest]
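To sanity-check the window the subsearch derives from the time picker, a minimal standalone sketch using the same relative_time arithmetic (the strftime formatting is only for readability, and note that info_min_time is 0 when the picker is set to "All time"):

| makeresults
| addinfo
| eval earliest=relative_time(info_min_time,"-17h@d+17h")
| eval latest=relative_time(earliest,"+24h")
| eval window=strftime(earliest,"%Y-%m-%d %H:%M:%S")." to ".strftime(latest,"%Y-%m-%d %H:%M:%S")
| table window

Run it with the same time range selected as the dashboard input and check that the window lands on the 5 PM boundaries you expect.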
Hi @msatish

Yes - a service account can be used in the same way as any other user. In fact, I always recommend that knowledge objects *should* be owned by a service account, because if they are owned by a user who leaves the organisation, the knowledge objects could become orphaned - or they could accidentally be deleted. If using SAML (for example) with Authentication Extensions enabled, users will be automatically updated based on groups/roles in the Identity Provider - so if they leave, their account will be deleted, and if they move teams they may have more or fewer permissions than they used to.

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
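If you want to spot knowledge objects that are already orphaned, a minimal sketch using REST (saved searches only here; this assumes you run it with admin rights, and the same idea applies to the other object-type endpoints):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| table title eai:acl.app eai:acl.owner
| search NOT [| rest /services/authentication/users splunk_server=local | fields title | rename title as "eai:acl.owner"]

Any rows returned are owned by accounts Splunk no longer knows about.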
@ITWhisperer Thanks, it's working fine when we are analyzing the current day (yesterday 5 PM to today 5 PM). Is it possible to replace now() with the time provided by the input time panel? i.e.:
- if I select today in the input time panel, it will consider the start of day as 5 PM of today
- if I select yesterday in the input time panel, it will consider the start of day as 5 PM of yesterday and the end of day as 5 PM of today
- if I select 31/03/2025 in the input time panel, it will consider the start of day as 5 PM of 31/03/2025 and the end of day as 5 PM of 01/04/2025
Can a service account be used as the owner of knowledge objects (saved searches, transforms/lookups, props/extracts, macros, and views)? Please share pros and cons.
Hi @Praz_123

You could try a rest call:

| rest /services/cluster/manager/health

This returns a number of interesting fields around SF/RF.

Returned values:
- all_data_is_searchable (Boolean): Indicates if all data in the cluster is searchable.
- all_peers_are_up (Boolean): Indicates if all peers are strictly in the Up status.
- cm_version_is_compatible (Boolean): Indicates if any cluster peers are running a Splunk Enterprise version greater than or equal to the cluster manager's version.
- multisite (Boolean): Indicates if multisite is enabled.
- no_fixups_in_progress (Boolean): Indicates if there are no buckets with bucket state NonStreamingTarget, or bucket search states PendingSearchable or SearchablePendingMask.
- pre_flight_check (Boolean): Indicates if the health check prior to a rolling upgrade was successful. This value is true only if the cluster passed all health checks.
- replication_factor_met (Boolean): Only valid for mode=manager and multisite=false. Indicates whether the replication factor is met. If true, the cluster has at least replication_factor number of raw data copies in the cluster.
- search_factor_met (Boolean): Only valid for mode=manager and multisite=false. Indicates whether the search factor is met. If true, the cluster has at least search_factor number of raw data copies in the cluster.
- site_replication_factor_met (Boolean): Only valid for mode=manager and multisite=true. Indicates whether the site replication factor is met. If true, the cluster has at least replication_factor number of raw data copies in the cluster.
- site_search_factor_met (Boolean): Only valid for mode=manager and multisite=true. Indicates whether the site search factor is met. If true, the cluster has at least site_search_factor number of raw data copies in the cluster.
- splunk_version_peer_count (String): Lists the number of cluster peers running each Splunk Enterprise version.

Check out the docs at https://docs.splunk.com/Documentation/Splunk/9.4.1/RESTREF/RESTcluster#cluster.2Fmanager.2Fhealth for more info on all the fields.

You could also check:

| rest /services/cluster/manager/info

- active_bundle: Provides information about the active bundle for this manager.
- bundle_creation_time_on_manager: The time, in epoch seconds, when the bundle was created on the manager.
- bundle_validation_errors_on_manager: A list of bundle validation errors.
- bundle_validation_in_progress: Indicates if bundle validation is in progress.
- bundle_validation_on_manager_succeeded: Indicates whether the manager succeeded in validating bundles.
- data_safety_buckets_to_fix: Lists the buckets to fix for the completion of data safety.
- gen_commit_buckets_to_fix: The buckets to be fixed before the next generation can be committed.
- indexing_ready_flag: Indicates if the cluster is ready for indexing.
- initialized_flag: Indicates if the cluster is initialized.
- label: The name for the manager. Displayed in the Splunk Web manager page.
- latest_bundle: The most recent information reflecting any changes made to the manager-apps configuration bundle. In steady state, this is equal to active_bundle. If it is not equal, then pushing the latest bundle to all peers is in process (or needs to be started).
- maintenance_mode: Indicates if the cluster is in maintenance mode.
- reload_bundle_issued: Indicates if the issued bundle is being reloaded.
- rep_count_buckets_to_fix: Number of buckets to fix on peers.
- rolling_restart_flag: Indicates whether the manager is restarting the peers in a cluster.
- search_count_buckets_to_fix: Number of buckets to fix to satisfy the search count.
- service_ready_flag: Indicates whether the manager is ready to begin servicing, based on whether it is initialized.
- start_time: Timestamp corresponding to the creation of the manager.

If you want specific fix-up info, check out https://docs.splunk.com/Documentation/Splunk/9.4.1/RESTREF/RESTcluster#cluster.2Fmanager.2Ffixup

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
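Putting that together, a minimal sketch of a single status search over the fields documented above (run it on, or pointed at, the cluster manager; the appendcols shaping is just one way to lay the output out):

| rest /services/cluster/manager/health
| fields search_factor_met replication_factor_met all_data_is_searchable all_peers_are_up
| appendcols
    [| rest /services/cluster/manager/info
     | fields rep_count_buckets_to_fix search_count_buckets_to_fix maintenance_mode]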
Since you only want to consider your day to start at the previous 5pm, you could try adjusting your search earliest time appropriately:

index=events_prod_cdp_penalty_esa source="SYSLOG" sourcetype=zOS-SYSLOG-Console
    ( TERM(VVF006H) OR TERM(VVF003H) OR TERM(VVZJ1BH) OR TERM(VVZJ1CH) OR TERM(VVZJ1DH) OR TERM(VVZJ1EH) OR TERM(HVVZK3A) )
    ("- ENDED" OR "- STARTED" OR "ENDED - ABEND")
    [| makeresults
     | eval earliest=relative_time(now(),"-17h@d+17h")]
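If you don't need the subsearch for anything else, the same window can usually be set inline, since earliest accepts the same relative-time syntax. The modifier -17h@d+17h reads as: go back 17 hours, snap to midnight, then add 17 hours, which lands on the most recent 5 PM:

index=events_prod_cdp_penalty_esa source="SYSLOG" sourcetype=zOS-SYSLOG-Console earliest=-17h@d+17h
    ( TERM(VVF006H) OR TERM(VVF003H) OR TERM(VVZJ1BH) OR TERM(VVZJ1CH) OR TERM(VVZJ1DH) OR TERM(VVZJ1EH) OR TERM(HVVZK3A) )
    ("- ENDED" OR "- STARTED" OR "ENDED - ABEND")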
Is there any query to check whether any fixups are pending, and that also shows SF, RF, and whether data is searchable on the cluster master? We can check in the cluster master UI, but is this information stored anywhere so that we can fetch it without going there? I need to create a query which shows the status of SF, RF, and searchability on the Cluster Master, and whether any fixups are pending.
@isoutamo is it possible to correct my Splunk query to fetch the status of the application as below?

Status of Application: This needs to be extracted using the query attached below:
- Planned: if the current time is earlier than the expected time of JOB1
- OK-Running: if the current time is between the expected time of JOB1 and the expected time of JOB5, and the status of every JOB is either OK or PLANNED
- KO-FAILED: if the current time is between the expected time of JOB1 and the expected time of JOB5, and the status of at least one JOB is KO
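A minimal sketch of that precedence with case(), assuming one row per JOB with placeholder fields application, expected_time (epoch), and status - the real query's field names will differ. Listing KO-FAILED before OK-Running makes a single KO win:

| stats min(expected_time) as job1_time max(expected_time) as job5_time sum(eval(if(status=="KO",1,0))) as ko_count by application
| eval status_of_application=case(
    now() < job1_time, "Planned",
    now() >= job1_time AND now() <= job5_time AND ko_count > 0, "KO-FAILED",
    now() >= job1_time AND now() <= job5_time, "OK-Running")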
Hi,

I have onboarded Palo Alto traffic and threat logs via HEC and SLS (Strata Logging Service). These logs are JSON logs and, per the documentation, they should come in under sourcetype=pan:firewall_cloud. All our dashboards are set up expecting traffic logs under pan:traffic and threat logs under pan:threat.

Having checked the props.conf and transforms.conf for sourcetype=pan:firewall_cloud, there is no rule to route the logs to pan:threat or pan:traffic. How is everyone dealing with this situation? I'd appreciate any workarounds or suggestions in general. This seems to be a big issue for anyone using SLS (Strata Logging Service). Thanks.
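One common workaround is an index-time sourcetype rewrite on the Splunk side; a minimal sketch is below. The "log_type" key and its values are assumptions (check a raw event for the JSON key that actually distinguishes traffic from threat), and this only takes effect where the data passes through a parsing tier (heavy forwarder or indexer), so verify the behaviour for your HEC endpoint:

props.conf:
[pan:firewall_cloud]
TRANSFORMS-route_pan = pan_set_traffic, pan_set_threat

transforms.conf:
[pan_set_traffic]
REGEX = "log_type"\s*:\s*"traffic"
FORMAT = sourcetype::pan:traffic
DEST_KEY = MetaData:Sourcetype

[pan_set_threat]
REGEX = "log_type"\s*:\s*"threat"
FORMAT = sourcetype::pan:threat
DEST_KEY = MetaData:Sourcetype

Note that index-time settings still come from pan:firewall_cloud (the rewrite happens after parsing), while search-time extractions are looked up under the new sourcetype, so test that the fields your dashboards use still resolve under pan:traffic / pan:threat.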
CSV file is attached.
Splunk query + CSV file + real output data are attached.
Thanks Ismo for your quick reply. I've attached the Splunk query, CSV file, and the output. Can you please let me know how I can use those values and _time from the indexed data of the ran job's log?
Hi isoutamo, Thank you for the tips regarding the CMC to get the macro! I tested your query and it is working well! Thank you for this; I will review it to fully understand it.
Hello @livehybrid - It's hard to tell; initially there were proxy issues between my org's network and Splunk Cloud, but I believe we fixed those, and hence I was able to access /services/apps/local. For other endpoints like /services/search/jobs and /services/server/info, I see traces in Splunk Cloud's internal access logs as if the requests are reaching the Splunk server, but I'm not sure whether Splunk is not returning the response in time, or whether on the Splunk side the response stream is stuck between its web server and some other layer; otherwise, why would the access log show a 200 response for the API call while I get a connection timeout?

About the Python code to invoke Splunk, here it is:

import time
import os
import logging
import splunklib.client as client
import splunklib.results as results
from splunklib.binding import HTTPError
from dotenv import load_dotenv
from datetime import datetime

# Configure logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
splunklib_logger = logging.getLogger("splunklib")
splunklib_logger.setLevel(logging.DEBUG)
logger = logging.getLogger(__name__)


class SplunkSearchClient:
    def __init__(self, host, port, username, password, retries=3, retry_delay=2):
        """
        Initializes the Splunk client.
        :param host: Splunk Cloud host
        :param port: Splunk management port (default 8089)
        :param username: Splunk username
        :param password: Splunk password
        :param retries: Number of retries for API failures
        :param retry_delay: Delay between retries
        """
        self.host = host
        self.port = port
        self.username = username
        self.password = password
        self.retries = retries
        self.retry_delay = retry_delay
        self.service = self._connect_to_splunk()

    @staticmethod
    def _convert_to_iso8601(time_str):
        """
        Converts a time string from 'yyyy-MM-dd HH:MM:SS' format to ISO8601 ('yyyy-MM-ddTHH:MM:SS').
        :param time_str: Time string in 'yyyy-MM-dd HH:MM:SS' format.
        :return: Time string in ISO8601 format.
        """
        dt = datetime.strptime(time_str, '%Y-%m-%d %H:%M:%S')
        return dt.isoformat()

    def _connect_to_splunk(self):
        """
        Establishes a connection to Splunk without retry logic.
        """
        try:
            service = client.connect(
                host=self.host,
                port=self.port,
                username=self.username,
                password=self.password,
                scheme="https",
                basic=True
            )
            return service
        except HTTPError as e:
            logger.error(f"Connection failed: {e}")
            raise

    def trigger_search(self, query, start_time, end_time):
        """
        Submits a search job to Splunk.
        :param query: SPL search query.
        :param start_time: Start time in 'yyyy-MM-dd HH:MM:SS' format.
        :param end_time: End time in 'yyyy-MM-dd HH:MM:SS' format.
        :return: Splunk job object.
        """
        # Convert to ISO8601 format for safety
        iso_start = self._convert_to_iso8601(start_time)
        iso_end = self._convert_to_iso8601(end_time)
        try:
            job = self.service.jobs.create(query, earliest_time=iso_start, latest_time=iso_end, timeout=60)
            print(f"Search job triggered successfully (Job ID: {job.sid})")
            return job
        except HTTPError as e:
            print(f"Failed to create search job: {e}")
            raise

    def wait_for_completion(self, job):
        """
        Waits for a Splunk search job to complete.
        :param job: Splunk search job object
        """
        logger.info("Waiting for job completion...")
        while not job.is_done():
            time.sleep(2)
            job.refresh()
        logger.info("Search job completed!")

    def fetch_results(self, job):
        """
        Fetches results from a completed Splunk search job.
        :param job: Splunk search job object
        :return: List of result dictionaries
        """
        try:
            reader = results.ResultsReader(job.results())
            output = [dict(result) for result in reader if isinstance(result, dict)]
            logger.info(f"Retrieved {len(output)} results")
            return output
        except HTTPError as e:
            logger.error(f"Error fetching results: {e}")
            raise

    def run_search(self, query, earliest_time="-15m", latest_time="now"):
        """
        Runs a full search workflow: triggers job, waits for completion, fetches results.
        :param query: SPL search query
        :param earliest_time: Time range start
        :param latest_time: Time range end
        :return: List of results
        """
        job = self.trigger_search(query, earliest_time, latest_time)
        self.wait_for_completion(job)
        results = self.fetch_results(job)
        job.cancel()  # Clean up the job
        return results


# Example Usage
if __name__ == "__main__":
    load_dotenv()
    splunk_client = SplunkSearchClient(
        host=os.getenv('SPLUNK_CLOUD_HOST'),
        port=int(os.getenv('SPLUNK_CLOUD_PORT', '8089')),
        username=os.getenv('SPLUNK_USERNAME'),
        password=os.getenv('SPLUNK_PASSWORD')
    )
    query = "search index=_internal | stats count by sourcetype"
    start_time = "2025-04-02 09:30:00"
    end_time = "2025-04-04 12:30:00"
    results = splunk_client.run_search(query, earliest_time=start_time, latest_time=end_time)
    for row in results:
        logger.info(row)

It fails in the trigger_search method, while calling the create method of the Jobs object.
Hi @4SplunkUser

The installation docs can be found at https://docs.splunk.com/Documentation/AddOns/released/VMWvcenterlogs/InstallOverview - this details the various places the app should be installed depending on your configuration/architecture.

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @Punnu

To achieve an inner-join effect and only keep results where messageID exists in both searches, you can filter the results after your stats command to remove rows where request_time is null (meaning the messageID only existed in the second search). Add | where isnotnull(request_time) after your stats command.

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
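For context, a minimal sketch of the whole pattern - the index and sourcetype names and the request/response split are placeholders for the actual two searches - where the final stats gives the count of messageIDs that survive the filter:

(index=my_index sourcetype=requests) OR (index=my_index sourcetype=responses)
| stats min(eval(if(sourcetype=="requests",_time,null()))) as request_time count by messageID
| where isnotnull(request_time)
| stats count as matching_messageIDs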