All Posts

Is there a query to check whether any fixup tasks are pending, and that also shows SF (search factor), RF (replication factor), and whether data is searchable on the cluster master? We can check this in the cluster master UI, but is this information logged anywhere so we can fetch it without going there? I need to create a query that shows the status of SF, RF, and data searchability on the cluster master, as well as any pending fixups.
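A possible starting point for the question above, sketched with hedges: the cluster/master/generation and cluster/master/fixup REST endpoints expose this information, assuming the search runs on the cluster master itself (splunk_server=local) or that you point splunk_server at the cluster master from a search head. Verify the endpoint names against your Splunk version; newer releases also expose cluster/manager/* equivalents.

| rest /services/cluster/master/generation splunk_server=local
| fields search_factor_met replication_factor_met

| rest /services/cluster/master/fixup level=generation splunk_server=local
| stats count AS pending_fixup_tasks

The second search passes level as an endpoint argument; other levels (e.g. replication_factor, search_factor) can be queried the same way.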
@isoutamo  is it possible to correct my Splunk query to fetch the status of the application as below? Status of Application (this needs to be extracted using the query attached below):
PLANNED: if the current time is less than the expected time of JOB1.
OK-RUNNING: if the current time is between the expected times of JOB1 and JOB5, and the status of every job is either OK or PLANNED.
KO-FAILED: if the current time is between the expected times of JOB1 and JOB5, and the status of any one job is KO.
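A sketch of that status logic in eval/case form, assuming hypothetical epoch-time fields job1_expected and job5_expected and a ko_count derived from the per-job statuses (the real field names would come from the attached query and CSV):

| eval status = case(
    now() < job1_expected, "PLANNED",
    now() <= job5_expected AND ko_count > 0, "KO-FAILED",
    now() <= job5_expected, "OK-RUNNING",
    true(), "UNKNOWN")

Because case() evaluates its clauses in order, the KO-FAILED branch must come before OK-RUNNING, and anything past JOB5's expected time falls through to UNKNOWN.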
Hi, I have onboarded Palo Alto traffic and threat logs via HEC and SLS (Strata Logging Service). These logs are JSON, and according to the documentation they should come in under sourcetype=pan:firewall_cloud. All our dashboards are set up expecting traffic logs under pan:traffic and threat logs under pan:threat. Having checked the props.conf and transforms.conf for sourcetype=pan:firewall_cloud, there is no rule to route the logs to pan:threat or pan:traffic. How is everyone dealing with this situation? I'd appreciate any workarounds or suggestions in general. This seems to be a big issue for anyone using SLS (Strata Logging Service). Thanks.
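One common workaround, sketched here rather than taken from the official add-on, is an index-time sourcetype override on whatever component first parses the HEC data (on Splunk Cloud this would typically go through Ingest Actions or a heavy forwarder you control). The log_type JSON key in the REGEX is an assumption; substitute whatever key actually distinguishes traffic from threat in your events, and note that rewriting the sourcetype changes which search-time extractions apply.

# props.conf (assumption: deployed where the HEC data is parsed)
[pan:firewall_cloud]
TRANSFORMS-pan_route = pan_set_traffic, pan_set_threat

# transforms.conf
[pan_set_traffic]
REGEX = "log_type"\s*:\s*"traffic"
FORMAT = sourcetype::pan:traffic
DEST_KEY = MetaData:Sourcetype

[pan_set_threat]
REGEX = "log_type"\s*:\s*"threat"
FORMAT = sourcetype::pan:threat
DEST_KEY = MetaData:Sourcetype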
CSV file is attached.
Splunk query + CSV file + real output data are attached.
Thanks Ismo for your quick reply. I've attached the Splunk query, CSV file, and the output. Can you please let me know how I can use those values along with _time from the indexed data in the completed job's log?
Hi isoutamo, Thank you for the tips regarding the CMC to get the macro! I tested your query and it is working well. Thank you for this; I will review it to fully understand it.
Hello @livehybrid - It's hard to tell, as initially there were proxy issues between my org's network and Splunk Cloud, but I believe we fixed that and hence was able to access /services/apps/local. For other endpoints like /services/search/jobs and /services/server/info, I see traces in Splunk Cloud's internal access logs as if the requests are reaching the Splunk server, but I'm not sure whether Splunk is not returning the response in time, or whether on the Splunk side the response stream is stuck between its web server and some other layer; otherwise, why would the access log show a 200 response for the API call while I get a connection timeout? About the Python code to invoke Splunk, here it is:

import time
import os
import logging
import splunklib.client as client
import splunklib.results as results
from splunklib.binding import HTTPError
from dotenv import load_dotenv
from datetime import datetime

# Configure logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
splunklib_logger = logging.getLogger("splunklib")
splunklib_logger.setLevel(logging.DEBUG)
logger = logging.getLogger(__name__)

class SplunkSearchClient:
    def __init__(self, host, port, username, password, retries=3, retry_delay=2):
        """
        Initializes the Splunk client.
        :param host: Splunk Cloud host
        :param port: Splunk management port (default 8089)
        :param username: Splunk username
        :param password: Splunk password
        :param retries: Number of retries for API failures
        :param retry_delay: Delay between retries
        """
        self.host = host
        self.port = port
        self.username = username
        self.password = password
        self.retries = retries
        self.retry_delay = retry_delay
        self.service = self._connect_to_splunk()

    @staticmethod
    def _convert_to_iso8601(time_str):
        """
        Converts a time string from 'yyyy-MM-dd HH:MM:SS' format to ISO8601 ('yyyy-MM-ddTHH:MM:SS').
        :param time_str: Time string in 'yyyy-MM-dd HH:MM:SS' format.
        :return: Time string in ISO8601 format.
        """
        dt = datetime.strptime(time_str, '%Y-%m-%d %H:%M:%S')
        return dt.isoformat()

    def _connect_to_splunk(self):
        """
        Establishes a connection to Splunk without retry logic.
        """
        try:
            service = client.connect(
                host=self.host,
                port=self.port,
                username=self.username,
                password=self.password,
                scheme="https",
                basic=True
            )
            return service
        except HTTPError as e:
            logger.error(f"Connection failed: {e}")
            raise

    def trigger_search(self, query, start_time, end_time):
        """
        Submits a search job to Splunk.
        :param query: SPL search query.
        :param start_time: Start time in 'yyyy-MM-dd HH:MM:SS' format.
        :param end_time: End time in 'yyyy-MM-dd HH:MM:SS' format.
        :return: Splunk job object.
        """
        # Convert to ISO8601 format for safety
        iso_start = self._convert_to_iso8601(start_time)
        iso_end = self._convert_to_iso8601(end_time)
        try:
            job = self.service.jobs.create(query, earliest_time=iso_start, latest_time=iso_end, timeout=60)
            print(f"Search job triggered successfully (Job ID: {job.sid})")
            return job
        except HTTPError as e:
            print(f"Failed to create search job: {e}")
            raise

    def wait_for_completion(self, job):
        """
        Waits for a Splunk search job to complete.
        :param job: Splunk search job object
        """
        logger.info("Waiting for job completion...")
        while not job.is_done():
            time.sleep(2)
            job.refresh()
        logger.info("Search job completed!")

    def fetch_results(self, job):
        """
        Fetches results from a completed Splunk search job.
        :param job: Splunk search job object
        :return: List of result dictionaries
        """
        try:
            # Note: ResultsReader is deprecated in newer splunklib releases
            # in favor of JSONResultsReader, but still works here.
            reader = results.ResultsReader(job.results())
            output = [dict(result) for result in reader if isinstance(result, dict)]
            logger.info(f"Retrieved {len(output)} results")
            return output
        except HTTPError as e:
            logger.error(f"Error fetching results: {e}")
            raise

    def run_search(self, query, earliest_time="-15m", latest_time="now"):
        """
        Runs a full search workflow: triggers job, waits for completion, fetches results.
        :param query: SPL search query
        :param earliest_time: Time range start
        :param latest_time: Time range end
        :return: List of results
        """
        job = self.trigger_search(query, earliest_time, latest_time)
        self.wait_for_completion(job)
        output = self.fetch_results(job)  # avoid shadowing the splunklib.results module
        job.cancel()  # Clean up the job
        return output

# Example Usage
if __name__ == "__main__":
    load_dotenv()
    splunk_client = SplunkSearchClient(
        host=os.getenv('SPLUNK_CLOUD_HOST'),
        port=int(os.getenv('SPLUNK_CLOUD_PORT', '8089')),
        username=os.getenv('SPLUNK_USERNAME'),
        password=os.getenv('SPLUNK_PASSWORD')
    )
    query = "search index=_internal | stats count by sourcetype"
    start_time = "2025-04-02 09:30:00"
    end_time = "2025-04-04 12:30:00"
    rows = splunk_client.run_search(query, earliest_time=start_time, latest_time=end_time)
    for row in rows:
        logger.info(row)

It fails in the trigger_search method, while calling the create method of the Jobs object.
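One way to narrow down whether the problem is in the SDK or the network path: a diagnostic sketch that calls the documented search/jobs REST endpoint directly with requests, using the same environment variables as the script above.

import os
import requests

# Create a search job directly over REST, bypassing splunklib entirely.
# A 201 response containing a sid means the endpoint itself is healthy.
resp = requests.post(
    f"https://{os.getenv('SPLUNK_CLOUD_HOST')}:8089/services/search/jobs",
    auth=(os.getenv('SPLUNK_USERNAME'), os.getenv('SPLUNK_PASSWORD')),
    data={"search": "search index=_internal | head 1", "output_mode": "json"},
    timeout=60,
)
print(resp.status_code, resp.text[:500])

If this also times out while the access log shows a success status, that points at the proxy or an intermediate layer buffering the response rather than at the SDK.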
Hi @4SplunkUser  The installation docs can be found at https://docs.splunk.com/Documentation/AddOns/released/VMWvcenterlogs/InstallOverview. This details the various places the app should be installed, depending on your configuration / architecture.
Hi @Punnu  To achieve an inner-join effect and keep only results where messageID exists in both searches, you can filter after your stats command to remove rows where request_time is null (meaning the messageID only existed in the second search). Add | where isnotnull(request_time) after your stats command.
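Putting it together, a sketch of the overall shape; everything except messageID and request_time is a placeholder to adapt to the real searches:

(index=app sourcetype=request) OR (index=app sourcetype=response)
| stats earliest(request_time) AS request_time values(status) AS status by messageID
| where isnotnull(request_time)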
Hi @tawm_12 , The simplest method is often configuring your applications within the containers to log to stdout/stderr and then using the Docker Splunk logging driver to forward these logs directly to your Splunk Cloud HEC endpoint. If your applications must log to files within the container filesystem, you can use a Universal Forwarder (UF) sidecar container.

Method 1: Docker Logging Driver (recommended if apps log to stdout/stderr)
1. Configure your application inside the Docker container to write its logs to standard output (stdout) and standard error (stderr). This is a common practice for containerized applications.
2. Configure the Docker daemon or individual containers to use the splunk logging driver, pointing it to your Splunk Cloud HEC endpoint and token.

Example docker run command:

docker run \
  --log-driver=splunk \
  --log-opt splunk-token=<HEC_TOKEN> \
  --log-opt splunk-url=https://<SPLUNK_HEC_HOST>:8088 \
  --log-opt splunk-format=json \
  --log-opt splunk-verify-connection=false \
  # Add other options like splunk-sourcetype, splunk-index, tag, etc.
  your-application-image

This method leverages Docker's built-in logging capabilities. The driver captures the container's stdout/stderr streams (which contain your application logs if configured correctly) and forwards them via HEC.

Method 2: Universal Forwarder Sidecar (if apps log to files)
1. Deploy a Splunk Universal Forwarder container alongside your application container.
2. Mount the volume containing the application log files into both the application container (for writing) and the UF container (for reading).
3. Configure the UF container's inputs.conf to monitor the log files within the mounted volume.
4. Configure the UF container's outputs.conf to forward data to your Splunk Cloud HEC endpoint or an intermediate Heavy Forwarder. Using HEC output from the UF is generally preferred for Splunk Cloud.

Example UF inputs.conf:

[monitor:///path/to/mounted/logs/app.log]
sourcetype = your_app_sourcetype
index = your_app_index
disabled = false

Example UF outputs.conf (for HEC):

[httpout]
uri = https://<SPLUNK_HEC_HOST>:8088
hecToken = <HEC_TOKEN>
# Consider sslVerifyServerCert = true in production after cert setup
sslVerifyServerCert = false
useACK = true

[tcpout:splunk_cloud_forwarder]
server = <HF_HOST>:<HF_PORT>
# Use if forwarding via UF->HF->Splunk Cloud S2S
# Other S2S settings...
# disabled = true  # Disable if using httpout

The UF actively monitors the specified log files and forwards new events. This is suitable when applications cannot log to stdout/stderr. The UF sidecar runs in parallel with your app container, sharing the log volume.

The Docker logging driver does send application logs, provided the application logs are directed to the container's stdout/stderr. The approach involving a separate Splunk Enterprise container solely for forwarding is overly complex and not typically recommended; a UF can forward directly or via a standard Heavy Forwarder infrastructure. If you are running in Kubernetes, consider using Splunk Connect for Kubernetes, which streamlines log collection using the OpenTelemetry Collector. For using HEC to send data to Splunk Cloud, see Splunk Lantern: Getting Data In - Best Practices for Getting Data into Splunk.
Hi @viren1990  This does sound like an odd situation; as you say, if one of the endpoints works then I would expect the others to as well. Would you be able to share some of the Python code you are using for the connection? The other thing that comes to mind is whether there is a firewall / proxy server between your server and your outbound connection to the internet. If so, there is a chance that it is letting the first request through while the others are blocked.
If all you want is to remove those extra messageIDs, you can simply remove those with a null request_time, like | search request_time = *
In addition to everybody's speculations, the biggest problem in the SPL in my opinion is that the whole search will only return one field, User; the entire exercise/homework is to simply restrict which User values are allowed. No inner join or stats is needed for this task because a plain old subsearch is designed for this. There are a million ways to do this. Given that the original SPL lavishes dedup on rest command outputs, I assume that 12k_line.csv is the largest dataset, so I am using that as the lead search. (Any command can be used as the lead search; the corresponding subsearches just need to be adjusted.)

| inputlookup 12k_line.csv where
    [rest /services/authentication/users splunk_server=local
    | search type=SAML
    | fields title
    | rename title AS User]
    [rest /servicesNS/-/-/directory
    | fields author
    | dedup author
    | rename author AS User]
| fields User
If I read your context correctly, you want to use the values of "name" in parameters as keys, and those of "value" as values, like the following based on your sample data (an empty cell means the parameter is absent from that event):

storedProcedureName                   | DocumentFileTypeId | DocumentId | DocumentTypeId | DocumentVersionId | IncludeInactive | RETURN_VALUE
DocumentFileTypeGetById               | 7                  |            |                |                   |                 | 0
DocumentAttributeGetByDocumentTypeId  |                    |            | 00             |                   | false           | 0
DocumentDetailGetByParentId           |                    | 000000     |                |                   |                 | 0
DocumentStatusHistoryGetByFK          |                    |            |                | 000000            |                 | 0
DocumentVersionGetByFK                |                    | 000000     |                |                   |                 | 0
DocumentLinkGetByFK                   |                    | 000000     |                |                   |                 | 0
DocumentGetById                       |                    | 000000     |                |                   |                 | 0
DocumentFileTypeGetById               | 7                  |            |                |                   |                 | 0
DocumentStatusHistoryGetByFK          |                    |            |                | 000000            |                 | 0
DocumentVersionGetByFK                |                    | 000000     |                |                   |                 | 0
DocumentLinkGetByFK                   |                    | 000000     |                |                   |                 | 0
DocumentGetById                       |                    | 000000     |                |                   |                 | 0

Here, I preserved storedProcedureName as a reference. Also note that when you sanitize sample data, any fake value with multiple zeros (0s) must be quoted in order to be valid JSON. To return the above, use the JSON functions introduced in 8.1:

| eval kvparams = json_object()
| foreach parameters mode=json_array
    [eval kvparams = json_set(kvparams, json_extract(<<ITEM>>, "name"), json_extract(<<ITEM>>, "value"))]
| spath input=kvparams
| rename @* as *

Here is a full emulation using the 12 events (with corrected JSON syntax) for you to play with and compare with real data.

| makeresults format=json data="[ {\"auditResultSets\":null,\"schema\":\"ref\",\"storedProcedureName\":\"DocumentFileTypeGetById\",\"commandText\":\"ref.DocumentFileTypeGetById\",\"Locking\":null,\"commandType\":4,\"parameters\":[{\"name\":\"@RETURN_VALUE\",\"value\":0},{\"name\":\"@DocumentFileTypeId\",\"value\":7}],\"serverIPAddress\":\"000.000.000.000\",\"serverHost\":\"Webserver\",\"clientIPAddress\":\"000.000.000.000\",\"sourceSystem\":\"WebSite\",\"module\":\"Vendor.Product.BLL.DocumentManagement\",\"accessDate\":\"2025-03-21T16:37:14.8614186-06:00\",\"userId\":\"0000\",\"userName\":\"username\",\"traceInformation\":[{\"type\":\"Page\",\"class\":\"Vendor.Product.Web.UI.Website.DocumentManagement.ViewDocument\",\"method\":\"Page_Load\"},{\"type\":\"Manager\",\"class\":\"Vendor.Product.BLL.DocumentManagement.DocumentManager\",\"method\":\"Get\"}]}, {\"auditResultSets\":null,\"schema\":\"ref\",\"storedProcedureName\":\"DocumentAttributeGetByDocumentTypeId\",\"commandText\":\"ref.DocumentAttributeGetByDocumentTypeId\",\"Locking\":null,\"commandType\":4,\"parameters\":[{\"name\":\"@RETURN_VALUE\",\"value\":0},{\"name\":\"@DocumentTypeId\",\"value\":\"00\"},{\"name\":\"@IncludeInactive\",\"value\":false}],\"serverIPAddress\":\"000.000.000.000\",\"serverHost\":\"Webserver\",\"clientIPAddress\":\"000.000.000.000\",\"sourceSystem\":\"WebSite\",\"module\":\"Vendor.Product.BLL.DocumentManagement\",\"accessDate\":\"2025-03-21T16:37:14.8614186-06:00\",\"userId\":\"0000\",\"userName\":\"username\",\"traceInformation\":[{\"type\":\"Page\",\"class\":\"Vendor.Product.Web.UI.Website.DocumentManagement.ViewDocument\",\"method\":\"Page_Load\"},{\"type\":\"Manager\",\"class\":\"Vendor.Product.BLL.DocumentManagement.DocumentManager\",\"method\":\"Get\"}]}, 
{\"auditResultSets\":null,\"schema\":\"ref\",\"storedProcedureName\":\"DocumentDetailGetByParentId\",\"commandText\":\"ref.DocumentDetailGetByParentId\",\"Locking\":null,\"commandType\":4,\"parameters\":[{\"name\":\"@RETURN_VALUE\",\"value\":0},{\"name\":\"@DocumentId\",\"value\":\"000000\"}],\"serverIPAddress\":\"000.000.000.000\",\"serverHost\":\"Webserver\",\"clientIPAddress\":\"000.000.000.000\",\"sourceSystem\":\"WebSite\",\"module\":\"Vendor.Product.BLL.DocumentManagement\",\"accessDate\":\"2025-03-21T16:37:14.8614186-06:00\",\"userId\":\"0000\",\"userName\":\"username\",\"traceInformation\":[{\"type\":\"Page\",\"class\":\"Vendor.Product.Web.UI.Website.DocumentManagement.ViewDocument\",\"method\":\"Page_Load\"},{\"type\":\"Manager\",\"class\":\"Vendor.Product.BLL.DocumentManagement.DocumentManager\",\"method\":\"Get\"}]}, {\"auditResultSets\":null,\"schema\":\"ref\",\"storedProcedureName\":\"DocumentStatusHistoryGetByFK\",\"commandText\":\"ref.DocumentStatusHistoryGetByFK\",\"Locking\":null,\"commandType\":4,\"parameters\":[{\"name\":\"@RETURN_VALUE\",\"value\":0},{\"name\":\"@DocumentVersionId\",\"value\":\"000000\"},{\"name\":\"@IncludeInactive\",\"value\":\"\"}],\"serverIPAddress\":\"000.000.000.000\",\"serverHost\":\"Webserver\",\"clientIPAddress\":\"000.000.000.000\",\"sourceSystem\":\"WebSite\",\"module\":\"Vendor.Product.BLL.DocumentManagement\",\"accessDate\":\"2025-03-21T16:37:14.8614186-06:00\",\"userId\":\"0000\",\"userName\":\"username\",\"traceInformation\":[{\"type\":\"Page\",\"class\":\"Vendor.Product.Web.UI.Website.DocumentManagement.ViewDocument\",\"method\":\"Page_Load\"},{\"type\":\"Manager\",\"class\":\"Vendor.Product.BLL.DocumentManagement.DocumentManager\",\"method\":\"Get\"}]}, {\"auditResultSets\":null,\"schema\":\"ref\",\"storedProcedureName\":\"DocumentVersionGetByFK\",\"commandText\":\"ref.DocumentVersionGetByFK\",\"Locking\":null,\"commandType\":4,\"parameters\":[{\"name\":\"@RETURN_VALUE\",\"value\":0},{\"name\":\"@DocumentId\",\"value\":\"000000\"}],\"serverIPAddress\":\"000.000.000.000\",\"serverHost\":\"Webserver\",\"clientIPAddress\":\"000.000.000.000\",\"sourceSystem\":\"WebSite\",\"module\":\"Vendor.Product.BLL.DocumentManagement\",\"accessDate\":\"2025-03-21T16:37:14.8614186-06:00\",\"userId\":\"0000\",\"userName\":\"username\",\"traceInformation\":[{\"type\":\"Page\",\"class\":\"Vendor.Product.Web.UI.Website.DocumentManagement.ViewDocument\",\"method\":\"Page_Load\"},{\"type\":\"Manager\",\"class\":\"Vendor.Product.BLL.DocumentManagement.DocumentManager\",\"method\":\"Get\"}]}, {\"auditResultSets\":null,\"schema\":\"ref\",\"storedProcedureName\":\"DocumentLinkGetByFK\",\"commandText\":\"ref.DocumentLinkGetByFK\",\"Locking\":null,\"commandType\":4,\"parameters\":[{\"name\":\"@RETURN_VALUE\",\"value\":0},{\"name\":\"@DocumentId\",\"value\":\"000000\"}],\"serverIPAddress\":\"000.000.000.000\",\"serverHost\":\"Webserver\",\"clientIPAddress\":\"000.000.000.000\",\"sourceSystem\":\"WebSite\",\"module\":\"Vendor.Product.BLL.DocumentManagement\",\"accessDate\":\"2025-03-21T16:37:14.8614186-06:00\",\"userId\":\"0000\",\"userName\":\"username\",\"traceInformation\":[{\"type\":\"Page\",\"class\":\"Vendor.Product.Web.UI.Website.DocumentManagement.ViewDocument\",\"method\":\"Page_Load\"},{\"type\":\"Manager\",\"class\":\"Vendor.Product.BLL.DocumentManagement.DocumentManager\",\"method\":\"Get\"}]}, 
{\"auditResultSets\":null,\"schema\":\"ref\",\"storedProcedureName\":\"DocumentGetById\",\"commandText\":\"ref.DocumentGetById\",\"Locking\":null,\"commandType\":4,\"parameters\":[{\"name\":\"@RETURN_VALUE\",\"value\":0},{\"name\":\"@DocumentId\",\"value\":\"000000\"}],\"serverIPAddress\":\"000.000.000.000\",\"serverHost\":\"Webserver\",\"clientIPAddress\":\"000.000.000.000\",\"sourceSystem\":\"WebSite\",\"module\":\"Vendor.Product.BLL.DocumentManagement\",\"accessDate\":\"2025-03-21T16:37:14.8457543-06:00\",\"userId\":\"0000\",\"userName\":\"username\",\"traceInformation\":[{\"type\":\"Page\",\"class\":\"Vendor.Product.Web.UI.Website.DocumentManagement.ViewDocument\",\"method\":\"Page_Load\"},{\"type\":\"Manager\",\"class\":\"Vendor.Product.BLL.DocumentManagement.DocumentManager\",\"method\":\"Get\"}]}, {\"auditResultSets\":null,\"schema\":\"ref\",\"storedProcedureName\":\"DocumentFileTypeGetById\",\"commandText\":\"ref.DocumentFileTypeGetById\",\"Locking\":null,\"commandType\":4,\"parameters\":[{\"name\":\"@RETURN_VALUE\",\"value\":0},{\"name\":\"@DocumentFileTypeId\",\"value\":7}],\"serverIPAddress\":\"000.000.000.000\",\"serverHost\":\"Webserver\",\"clientIPAddress\":\"000.000.000.000\",\"sourceSystem\":\"WebSite\",\"module\":\"Vendor.Product.BLL.DocumentManagement\",\"accessDate\":\"2025-03-21T16:37:14.736377-06:00\",\"userId\":\"0000\",\"userName\":\"username\",\"traceInformation\":[{\"type\":\"Page\",\"class\":\"Vendor.Product.Web.UI.Website.DocumentManagement.DocumentManagementMain\",\"method\":\"ViewDocument\"},{\"type\":\"Manager\",\"class\":\"Vendor.Product.BLL.DocumentManagement.DocumentManager\",\"method\":\"GetLatestDocumentwithoutAttributes\"}]}, {\"auditResultSets\":null,\"schema\":\"ref\",\"storedProcedureName\":\"DocumentStatusHistoryGetByFK\",\"commandText\":\"ref.DocumentStatusHistoryGetByFK\",\"Locking\":null,\"commandType\":4,\"parameters\":[{\"name\":\"@RETURN_VALUE\",\"value\":0},{\"name\":\"@DocumentVersionId\",\"value\":\"000000\"},{\"name\":\"@IncludeInactive\",\"value\":\"\"}],\"serverIPAddress\":\"000.000.000.000\",\"serverHost\":\"Webserver\",\"clientIPAddress\":\"000.000.000.000\",\"sourceSystem\":\"WebSite\",\"module\":\"Vendor.Product.BLL.DocumentManagement\",\"accessDate\":\"2025-03-21T16:37:14.736377-06:00\",\"userId\":\"0000\",\"userName\":\"username\",\"traceInformation\":[{\"type\":\"Page\",\"class\":\"Vendor.Product.Web.UI.Website.DocumentManagement.DocumentManagementMain\",\"method\":\"ViewDocument\"},{\"type\":\"Manager\",\"class\":\"Vendor.Product.BLL.DocumentManagement.DocumentManager\",\"method\":\"GetLatestDocumentwithoutAttributes\"}]}, {\"auditResultSets\":null,\"schema\":\"ref\",\"storedProcedureName\":\"DocumentVersionGetByFK\",\"commandText\":\"ref.DocumentVersionGetByFK\",\"Locking\":null,\"commandType\":4,\"parameters\":[{\"name\":\"@RETURN_VALUE\",\"value\":0},{\"name\":\"@DocumentId\",\"value\":\"000000\"}],\"serverIPAddress\":\"000.000.000.000\",\"serverHost\":\"Webserver\",\"clientIPAddress\":\"000.000.000.000\",\"sourceSystem\":\"WebSite\",\"module\":\"Vendor.Product.BLL.DocumentManagement\",\"accessDate\":\"2025-03-21T16:37:14.736377-06:00\",\"userId\":\"0000\",\"userName\":\"username\",\"traceInformation\":[{\"type\":\"Page\",\"class\":\"Vendor.Product.Web.UI.Website.DocumentManagement.DocumentManagementMain\",\"method\":\"ViewDocument\"},{\"type\":\"Manager\",\"class\":\"Vendor.Product.BLL.DocumentManagement.DocumentManager\",\"method\":\"GetLatestDocumentwithoutAttributes\"}]}, 
{\"auditResultSets\":null,\"schema\":\"ref\",\"storedProcedureName\":\"DocumentLinkGetByFK\",\"commandText\":\"ref.DocumentLinkGetByFK\",\"Locking\":null,\"commandType\":4,\"parameters\":[{\"name\":\"@RETURN_VALUE\",\"value\":0},{\"name\":\"@DocumentId\",\"value\":\"000000\"}],\"serverIPAddress\":\"000.000.000.000\",\"serverHost\":\"Webserver\",\"clientIPAddress\":\"000.000.000.000\",\"sourceSystem\":\"WebSite\",\"module\":\"Vendor.Product.BLL.DocumentManagement\",\"accessDate\":\"2025-03-21T16:37:14.736377-06:00\",\"userId\":\"0000\",\"userName\":\"username\",\"traceInformation\":[{\"type\":\"Page\",\"class\":\"Vendor.Product.Web.UI.Website.DocumentManagement.DocumentManagementMain\",\"method\":\"ViewDocument\"},{\"type\":\"Manager\",\"class\":\"Vendor.Product.BLL.DocumentManagement.DocumentManager\",\"method\":\"GetLatestDocumentwithoutAttributes\"}]}, {\"auditResultSets\":null,\"schema\":\"ref\",\"storedProcedureName\":\"DocumentGetById\",\"commandText\":\"ref.DocumentGetById\",\"Locking\":null,\"commandType\":4,\"parameters\":[{\"name\":\"@RETURN_VALUE\",\"value\":0},{\"name\":\"@DocumentId\",\"value\":\"000000\"}],\"serverIPAddress\":\"000.000.000.000\",\"serverHost\":\"Webserver\",\"clientIPAddress\":\"000.000.000.000\",\"sourceSystem\":\"WebSite\",\"module\":\"Vendor.Product.BLL.DocumentManagement\",\"accessDate\":\"2025-03-21T16:37:14.736377-06:00\",\"userId\":\"0000\",\"userName\":\"username\",\"traceInformation\":[{\"type\":\"Page\",\"class\":\"Vendor.Product.Web.UI.Website.DocumentManagement.DocumentManagementMain\",\"method\":\"ViewDocument\"},{\"type\":\"Manager\",\"class\":\"Vendor.Product.BLL.DocumentManagement.DocumentManager\",\"method\":\"GetLatestDocumentwithoutAttributes\"}]} ]" | fields parameters storedProcedureName | eval kvparams = json_object() | foreach parameters mode=json_array [eval kvparams = json_set(kvparams, json_extract(<<ITEM>>, "name"), json_extract(<<ITEM>>, "value"))] | spath input=kvparams | rename @* as * | fields - _* parameters kvparams  
@tawm_12  We recently completed an integration for one of our customers using the following links:

https://stackoverflow.com/questions/53287922/how-to-forward-application-logs-to-splunk-from-docker-container
https://www.splunk.com/en_us/blog/tips-and-tricks/splunk-logging-driver-for-docker.html
@4SplunkUser  You need to install the Splunk Add-on for vCenter Logs (specifically the Splunk_TA_vcenter package) on your search head if you want the search-time field extractions to work correctly. This ensures that when you search the vCenter log data in Splunk, the fields (e.g., event types, timestamps, etc.) are properly parsed and displayed.

You can also install the Splunk Add-on for vCenter Logs (Splunk_TA_vcenter) on a Heavy Forwarder (HF), and in some cases it makes a lot of sense, depending on your Splunk architecture. The add-on has both index-time (e.g., line breaking, timestamp recognition) and search-time (e.g., field extractions) components. Installing it on the HF ensures index-time processing happens there, which can reduce load on indexers. However, you'll still need it on the search head for search-time fields.

I can see that the add-on is capable of parsing data for the following sourcetypes:
- vmware:vclog:vpxd
- vmware:vclog:vpxd-alert
- vmware:vclog:vpxd-profiler
- vmware:vclog:vws
- vmware:vclog:cim-diag
- vmware:vclog:stats

To ingest vCenter logs into Splunk (a minimal inputs.conf sketch follows below):
1. Configure ESXi/vCenter to send logs to a syslog receiver (UF/HF).
2. Use the Splunk Add-on on that receiver to parse those logs.
3. Ensure the add-on is also installed on the HF / Search Head as per your environment.

NOTE: Ensure that your logs align with the expected sourcetypes defined in the props.conf and transforms.conf configurations.
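For the receiver step, a minimal inputs.conf sketch on the UF/HF; the monitored path, index, and sourcetype here are illustrative assumptions (it assumes a syslog daemon writes the vCenter logs to disk) and should be checked against the sourcetypes in the add-on's props.conf:

# inputs.conf on the syslog receiver (example path and index)
[monitor:///var/log/vmware/vpxd.log]
sourcetype = vmware:vclog:vpxd
index = vmware
disabled = false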
The Splunk Add-on for vCenter Logs does not have anything under the installation tab. Do we just need to install it on the search head for the vCenter logs to be interpreted correctly, or is it something that can be used to get the logs into Splunk via API calls? Better documentation would be great, as it is a Splunk-supported app.