  Hello Splunk Community, I have .evtx files from several devices, and I would like to analyze them using Splunk Universal Forwarder (the agent). I want to set up the agent to continuously monitor these files as if the data is live, so that I can apply Splunk Enterprise Security (ES) rules to them.
Hello Team, I have a total of only 192 Business Transactions in first-class BTs. I know the per-node limit is 50 and the per-application limit is 200. Why, then, are my Business Transactions being registered under "All Other Traffic" instead of as first-class BTs? I have attached two screenshots for your reference (last 1 hour & last 1 week BT lists). Please let me know the appropriate answer/solution to my question. Thanks & Regards, Satishkumar
I am facing an issue after migrating a single-site indexer cluster to multisite. SF/RF are still not met after 5 days, and the fixup task count keeps increasing. Can anyone help resolve this SF/RF issue?
Hi, I am getting the below error when trying to save the data inputs (all 5) that come as part of the Nutanix add-on. Has anyone seen this before and can suggest something?

Error: Encountered the following error while trying to save: Argument validation for scheme=nutanix_alerts: script running failed (PID 24107 killed by signal 9: Killed).
Hi Team, could you please let us know when the latest version of the Splunk OVA for VMware will be released?
Hi all, I'm having issues comparing the user field in Palo Alto traffic logs vs the last user reported by CrowdStrike/Windows events. Palo Alto traffic logs show a different user initiating the traffic during the time window compared to the CrowdStrike last-user-login reported for the same endpoint. Has anyone faced a similar issue? Thanks
Using the below props, but we don't see logs reporting to Splunk. We are assuming that the | (pipe) symbol works as a delimiter and that we cannot use it in props. We just want to know whether this props.conf is correct:

[tools:logs]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\d{4}\-\d{2}\-\d{2}\s\|\d{2}:\d{2}:\d{2}.\d{3}\s\|
TIME_PREFIX=^
TIME_FORMAT=%Y-%m-%d | %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=28

Sample logs:

2022-02-22 | 04:00:34:909 | main stream logs | Staticapp-1 - Restart completed
2022-02-22 | 05:00:34:909 | main stream applicationlogs | Staticapp-1 - application logs (total=0, active=0, waiting=0) completed
2022-02-22 | 05:00:34:909 | main stream applicationlogs | harikpool logs-1 - mainframe script (total=0, active=0, waiting=0) completed
Step-by-Step Guide to Deploying AppDynamics Smart Agent Using Ansible on Linux Systems

This article will guide you through installing the AppDynamics Smart Agent on a Linux system using Ansible. It covers downloading, configuring, and starting the Smart Agent in an automated fashion. This setup ensures that the Smart Agent is correctly configured for your environment.

Prerequisites:
- Ansible Installed: Make sure Ansible is installed on the machine where you are running the playbook.
- Sudo Privileges: The playbook requires sudo (root) privileges to execute tasks.
- Download URL: You need a valid download URL for the AppDynamics Smart Agent. You can get this from the AppDynamics download site (replace the provided URL with your own).

Steps to Set Up the Ansible Playbook

1. Directory Structure

Create a directory structure for the Ansible playbook as follows:

inventory
playbook.yml
roles/
  smart_agent/
    tasks/
      main.yml

2. Inventory File

Define your target machine (localhost in this case) in the inventory file:

[appd_agents]
localhost ansible_connection=local

3. Ansible Playbook (playbook.yml)

The main playbook references the smart_agent role. Ensure become: true is set to allow privilege escalation for necessary tasks:

---
- hosts: appd_agents
  become: true
  roles:
    - smart_agent

4. Role Tasks (roles/smart_agent/tasks/main.yml)

The tasks in this role cover downloading, unarchiving, configuring, and starting the Smart Agent.
- name: Download AppDynamics Smart Agent using curl
  command: >
    curl -L -O
    -H "Authorization: Bearer <YOUR_AUTH_TOKEN>"
    "https://download.appdynamics.com/download/prox/download-file/appdsmartagent/<version>/appdsmartagent_64_linux_<version>.zip"
  args:
    chdir: /tmp

- name: Unarchive the Smart Agent zip
  unarchive:
    src: /tmp/appdsmartagent_64_linux_<version>.zip
    dest: /opt/appdynamics/
    remote_src: yes

- name: Configure Smart Agent in config.ini
  replace:
    path: /opt/appdynamics/config.ini
    regexp: 'ControllerURL\s*=\s*.*'
    replace: 'ControllerURL=https://xxxxx.saas.appdynamics.com'

- name: Set ControllerPort in config.ini
  replace:
    path: /opt/appdynamics/config.ini
    regexp: 'ControllerPort\s*=\s*.*'
    replace: 'ControllerPort=443'

- name: Set FMServicePort in config.ini
  replace:
    path: /opt/appdynamics/config.ini
    regexp: 'FMServicePort\s*=\s*.*'
    replace: 'FMServicePort=443'

- name: Set AccountAccessKey in config.ini
  replace:
    path: /opt/appdynamics/config.ini
    regexp: '^AccountAccessKey\s*=\s*.*'
    replace: 'AccountAccessKey=<YOUR_ACCOUNT_ACCESS_KEY>'

- name: Ensure AccountName is set in the main section of config.ini
  lineinfile:
    path: /opt/appdynamics/config.ini
    regexp: '^AccountName\s*='
    line: 'AccountName=xxxxx'
    insertafter: '^ControllerPort\s*=.*'

- name: Enable SSL in config.ini
  replace:
    path: /opt/appdynamics/config.ini
    regexp: 'EnableSSL\s*=\s*.*'
    replace: 'EnableSSL=true'

- name: Start Smart Agent
  shell: /opt/appdynamics/smartagentctl start --service > /tmp/log.log 2>&1
  become: yes
  register: output

5. Replace the Download URL and Other Controller Parameters

Replace the download URL placeholder with your own Smart Agent download URL from the AppDynamics download site. In the command task for downloading the Smart Agent, replace <YOUR_AUTH_TOKEN> with your AppDynamics authentication token and replace <version> with the appropriate version of your Smart Agent.
For example:

- name: Download AppDynamics Smart Agent using curl
  command: >
    curl -L -O
    -H "Authorization: Bearer YOUR_AUTH_TOKEN"
    "https://download.appdynamics.com/download/prox/download-file/appdsmartagent/24.8.0.551/appdsmartagent_64_linux_24.8.0.551.zip"
  args:
    chdir: /tmp

Also replace ControllerURL, ControllerPort, AccountAccessKey, and AccountName with your credentials.

6. Running the Playbook

To run the playbook, execute the following command:

ansible-playbook -i inventory playbook.yml

Conclusion

This Ansible playbook simplifies downloading, configuring, and running the AppDynamics Smart Agent on a Linux system. Make sure to replace the download URL and account details with your specific values, and you’ll have the agent up and running in no time.
I would like to clean up the messaging I'm sending to Slack for Splunk alerts. I've tried markdown [text](http://url), which doesn't work and renders the text exactly as displayed here. I've also tried <text|http://url>, which renders verbatim as well. Is there any way to have Slack hide URLs behind text like a normal hyperlink? My alerts look really awful with huge links back to Splunk searches and dashboards. TYIA
Implementing and Managing Auto Instrumentation with AppDynamics Cluster Agent

The Cluster Agent uses the init-container approach to instrument apps based on the rules you define. It can specifically target apps that belong to a namespace, carry a specific label, or match a specific deployment or container name. The Cluster Agent can also be configured to automatically push one of the 3 APM agents, i.e. the Java, .NET Core, or Node.js APM agents.

Looking at the requirements for the Cluster Agent, there are no details on how much additional resource is required when auto instrumentation is enabled. This is because of the way the auto instrumentation is done. Technically speaking, a single instance of the Cluster Agent is capable of instrumenting an unbounded number of deployments, but in general, as mentioned in the AppDynamics Doc link above, for every 100 pods the Cluster Agent requires 50 MB of memory and 100 millicores of CPU.

Steps involved in auto instrumentation:
1. The Cluster Agent is deployed on the environment and begins checking for Deployments, StatefulSets, and ReplicaSets that conform to the configured instrumentation rules.
2. The Cluster Agent modifies the matching Deployments/StatefulSets, sets them to pending status, and adds the init container plus the environment variables the agent needs to connect to the controller.
3. These modified workloads are rolled out, and the Cluster Agent generally performs 2 depth checks on the newly created pods to ensure auto instrumentation is complete:
   - The first depth check looks for the agent binary in the /opt/appdynamics-java, /opt/appdynamics-nodejs, or /opt/appdynamics-dotnetcore folder.
   - The second depth check verifies that the APM agent node logs exist, e.g. /opt/appdynamics-java/ver*/logs/. It also performs a controller API check that the node name exists in the UI.
4. If both checks succeed and the rollout is complete, the annotation on the deployment is updated from pending to successful.
If not, it is marked failed and the Cluster Agent won't re-target it again.

The init containers are spawned with fixed memory and CPU requests, and the limits cannot be modified, i.e. they are hardcoded. They are hardcoded because the init container's lifecycle only lasts until the agent binary is copied to the new pod; in real life the init container exits before the actual container is loaded and started. Also, since the actual container is always bigger than the init container, the scheduler will always schedule pods based on the actual container's requirements, which are always supposed to be more than the init container's.

Having said this, the rollout strategy does have an effect on the total memory requirement, as the Cluster Agent may create pods that are spawned with an additional CPU and memory requirement at pod start. This, however, should be similar to when the app is upgraded, as the same rollout strategy comes into play there as well.
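The targeting rules described above are expressed in the Cluster Agent spec. As an illustrative sketch only: the field names below follow the AppDynamics Cluster Agent CRD, but the controller URL, account, namespaces, app names, and labels are hypothetical placeholders, so verify the exact schema against the documentation for your agent version:

```yaml
apiVersion: cluster.appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  appName: my-k8s-cluster                 # hypothetical cluster name
  controllerUrl: https://xxxxx.saas.appdynamics.com
  account: xxxxx
  # Auto-instrumentation via the init-container approach
  instrumentationMethod: Env
  nsToInstrumentRegex: ecom|payments      # hypothetical target namespaces
  defaultAppName: ecom-app
  instrumentationRules:
    - namespaceRegex: ecom
      language: java                      # one of java, dotnetcore, nodejs
      labelMatch:
        - framework: spring               # only pods carrying this label
    - namespaceRegex: payments
      language: dotnetcore
```

A rule like this is what the Cluster Agent evaluates in step 1 above before mutating a workload.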
So, I want to create a dashboard for a particular team in my company, and they want to add notes to the dashboard for everyone on their team to view. Is that possible, and if yes, can you refer me to something? Thank you!
I would like to create a dashboard which would run a daily search to check network traffic against a list of about 18,000 IP addresses. We created a lookup table with all the IP addresses and ran it, but the search times out. Then we tried to split the lookup table into 8 different tables, with each table driving a panel in our dashboard. A few panels will run when we do it this way, but the rest time out. One idea was to create a drop-down so the searches only run when we specify, or to create a search that runs one lookup table and only starts the next search when the previous one finishes. Is there a simpler way to do this? Ideally it would all be one search, but that just seems to be too much for our resources.
Troubleshooting Agent Registration Failures Due to Unauthorized Errors in Observability Systems

There can be a multitude of reasons why an agent is unable to register with the controller, but if the agent logs an HTTP 401 (Unauthorized) error, the network can safely be ruled out as the issue: this response is returned by the controller when the request does not fit its allowed criteria. Unauthorized errors happen when an agent tries to register with improper credentials, or when the controller has a scope on the license rule that denies the agent registration.

The 4 distinct parameters that let the agent connect to the controller are:
- Controller host name
- Controller port
- Access key
- Account name

If the controller is TLS-enabled, the ssl-enabled flag is crucial, and the port changes accordingly. Each agent takes these parameters differently, so please refer to the documentation on how to configure each agent type: Python, Java, Machine Agent, .NET.

However, some agents require a different approach. For the Cluster Agent, first create the secret in the appdynamics namespace:

kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key='<access-key>' --from-literal=api-user='<username@account:password>'

If this is not done, the Cluster Agent YAML must have these details populated.

If the agent-side configuration is not the issue, the problem may lie with the controller license, i.e. the access key you used. Please ensure that the access key has valid license units and that no Application or Server scope is blocking registration between the agent and controller. A quick way to test is to create a new license rule from the default, assign just 1 license unit to it, use its access key, and try registering the agent. If that succeeds, the license rule used before was the culprit.
If these still don't help, please raise a case with support and provide screenshots of the agent configuration, the logs from the agent side, and a screenshot of the license rule showing the available units and scopes.
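To make the four connection parameters concrete, here is a sketch of how they are commonly passed to the Java agent as JVM system properties. The property names follow the AppDynamics Java Agent documentation, but all host, account, key, and naming values below are placeholders; adapt them to your environment:

```shell
java -javaagent:/opt/appdynamics/javaagent.jar \
  -Dappdynamics.controller.hostName=xxxxx.saas.appdynamics.com \
  -Dappdynamics.controller.port=443 \
  -Dappdynamics.controller.ssl.enabled=true \
  -Dappdynamics.agent.accountName=xxxxx \
  -Dappdynamics.agent.accountAccessKey=<access-key> \
  -Dappdynamics.agent.applicationName=MyApp \
  -Dappdynamics.agent.tierName=MyTier \
  -Dappdynamics.agent.nodeName=node-1 \
  -jar myapp.jar
```

Note how the ssl.enabled flag and port 443 go together for a TLS-enabled controller, as described above.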
Resolving Issues with Missing Hardware and Custom Metrics in Server Visibility Agents

SIM machine agents, a.k.a. Server Visibility agents, are used to publish hardware metrics for the underlying nodes or servers that host applications. One SIM Machine Agent may correlate to multiple APM agents. SIM machine agents may also host custom extensions that use the Machine Agent to piggyback custom metrics to the controller. In these scenarios, the metrics play a pivotal role in application-to-infrastructure correlation and custom app monitoring via extensions. A loss of metrics from the Machine Agent means an actual loss of monitoring, and thereby potential revenue loss if not detected and remediated in time.

Metrics can be lost for various reasons, such as exceeding the metric limits of the agent or the controller, loss of connectivity, the Machine Agent process being starved of CPU cycles, or issues with memory optimization of the Machine Agent. However, this article specifically covers the case where the Machine Agent is working on the server and we see the non-metric data (total vCPU count, total memory, and other details such as server tags, i.e. values that do not vary over time) but not the hardware or custom metrics the agent should report.

Metric limits being hit

Metric limits exist both at the controller and on the agent side. By default, the MA can publish a maximum of 450 metrics, which may not be sufficient if there are a lot of volumes, networks, or process classes configured for the Machine Agent. Luckily, agent-side metric limits can be quickly overridden as described in the docs at https://docs.appdynamics.com/appd/24.x/24.8/en/application-monitoring/administer-app-server-agents/metrics-limits. The controller also has limits on the account, the application, and the total number of custom metrics that can be registered. If you notice metrics not being registered, increase the corresponding limit on the controller.
The issue with the SIM extension/module being initialized

When the MA is started with the SIM flag set to true, it first ensures a license exists; if one does, the MA registers and then makes some API calls: first for the controller server time, and second to check whether the MA is enabled for monitoring in the controller, i.e. not disabled. Once these succeed, it enables the ServerMonitoring extension, which is present by default in every Machine Agent binary. If the ServerMonitoring files are corrupt or have an indentation issue, you may see a warning in the MA logs:

WARN UriConfigProvider - Could not deserialize configuration at file:<MA_HOME>/extensions/ServerMonitoring/conf/ServerMonitoring.yml
com.fasterxml.jackson.dataformat.yaml.snakeyaml.error.MarkedYAMLException: while scanning for the next token
found character '\t(TAB)' that cannot start any token. (Do not use \t(TAB) for indentation)
.
.
.
at [Source: (byte[])"# WARNING: Before making any changes to this file read the following section carefully
#
# After editing the file, make sure the file follows the yml syntax. Common issues include
# - Using tabs instead of spaces
# - File encoding should be UTF-8
#
# The safest way to edit this file is to copy paste the examples provided and make the
# necessary changes using a plain text editor instead of a WYSIWYG editor.

The above example was captured after the ServerMonitoring.yml file was modified using tabs instead of spaces (it is a YAML file), but the general idea is that the ServerMonitoring extension must initialize for its data to be sent. Checking these points will give you more ideas as to why an incomplete set of metrics was sent to the controller by a Machine Agent.
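As a sketch of raising the agent-side metric limit mentioned above: the appdynamics.agent.maxMetrics system property is the knob described in the metric-limits documentation linked earlier, and the value 1000 here is an arbitrary example, not a recommendation:

```shell
# Hypothetical example: start the Machine Agent with a raised agent-side metric limit
java -Dappdynamics.agent.maxMetrics=1000 -jar machineagent.jar
```

Raise the controller-side limits as well if the registered-metric count is being capped there rather than at the agent.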
To start, if you're new to synthetic monitoring, I recommend exploring this synthetic monitoring overview.

In today's fast-paced world of web development, browser synthetic testing is a vital tool in the observability toolbox. With technical debt piling up faster than code reviews, it's essential to have a strategy in place that gives you confidence in your application's health. As someone who’s been responsible for delivering high-quality, high-confidence observability services across several Fortune 500 companies, I can confidently say: I slept better knowing our critical client-facing applications had synthetic monitoring coverage.

Synthetic monitoring isn’t just a buzzword; it’s an active safety net that ensures your site’s availability, performance, and functionality are up to par, even when no one’s watching. When your synthetic browser monitoring spots a problem, especially on mission-critical apps, you know it’s a real problem that needs to get fixed fast.

As an observability practitioner, I've spent countless hours in the trenches, implementing & tuning synthetic browser monitoring to catch potential issues early. These tools work in harmony with passive monitoring approaches like RUM and APM. Over time, I’ve gathered insights and best practices that I’d like to share with you—especially if you’re looking to fine-tune your synthetic browser tests for success.

1. Power in Simplicity

Enterprise web applications are stuffed with features, but not all of them are equally important. For an e-commerce site, the critical functions include the landing page, login, product search, and payment. While features like live chat, profile customization, and social sharing are cool, they won’t break the business if they’re down for a while.

Draft a Short User Journey

Start by working with someone who knows the ins and outs of the app—maybe the end-user or product manager.
Together, map out the user journey they typically take, noting prerequisites (like login before checkout) and success criteria (e.g., seeing “Welcome, John Doe” after logging in). If possible, record this session to reference while building your synthetic tests. You’ll also want to group certain steps into transactions—this helps you logically organize the journey, generate transactional KPIs, and focus on key points in the flow. Using transactions lets you focus on the relevant things the user does, not the minutiae of what page was loaded when.

Focus on Critical Actions

When designing your tests, stay focused. It’s tempting to test everything, but remember: just because you can doesn’t mean you should. Stick to the critical actions you identified earlier, like logging in, searching, and making purchases.

Finally, focus on underlying service invocation—you want your synthetic tests to complement passive monitoring strategies like APM, not replace functional testing. Ideally, your synthetic testing will also exercise key backend components. You’d do this by building user workflows that reflect all the critical functionality your site has to offer. As a bonus, in Splunk Synthetic Monitoring, you can even see directly how user transactions affect your backend services from each synthetics run.

2. Validate User Actions

A synthetic test is only as good as its validation. Ensuring each user action produces the expected result is critical.

Confirm Actions Have the Intended Result

Leverage assertions to validate that content renders correctly. For instance, testing a search function at Splunk T-shirt co? Instead of searching for “Supernova Limited Edition T-Shirt,” invoke a search for “shirt”. Once the limited edition shirt is no longer available, a test searching for it by name would fail and need to be updated, whereas the search term “shirt” would always return results.
Build Robust Tests to Avoid False Positives

False positives are the bane of synthetic tests. A robust strategy can minimize them, such as using fuzzy searches instead of exact terms, as mentioned above. You can also implement assertions to wait for elements to load before interacting with them, preventing a race condition from failing your test. Also, be mindful of things like maintenance/downtime windows, as tests run during these times may result in false positives. And, of course, test in the same environment configuration that your users will experience—this includes matching viewport size, cookies, and browser settings.

3. Test Hygiene Matters

Keeping your tests organized is just as important as the tests themselves.

Follow a Naming Convention

A consistent naming convention makes it easier to manage your tests. Use names that clearly reflect the application, environment, or action being tested. For instance: `Application XYZ Checkout_Success_Prod_Test`.

Add Custom Properties (Tags)

Tagging tests with custom properties—like `app_name`, `environment`, `support_level`, or `component`—helps with test isolation and troubleshooting. You can use these tags to quickly filter your tests in order to isolate issues, or you might leverage them in your alerts to provide additional context to responders. Align these tags with your organization's existing tagging standards for maximum clarity.

Use Consistent Testing Locations

Ensure your tests are run from consistent locations to get reliable, comparable results. Make sure those locations are aligned with your users’ geography—whether internal or external.

Leverage Variables

Many robust synthetic monitoring solutions provide a means to parameterize values used in your test configuration. Leveraging variables allows for centralized management of these values. A common use case is to store a regularly used username and password in variables.
In this example, if you need to update the username or password, you now simply update the variables instead of searching through your synthetic test configurations and updating the tests one by one.

4. Act! Alert! Alert!

Once your tests are in place, you need a reliable alerting strategy. High-confidence alerting is crucial to respond quickly when an issue arises. The biggest advantage of synthetic monitoring is the knowledge that a failure is generally highly actionable (assuming you’ve followed the rules earlier in this article to generate good tests).

Build Detectors for KPI Thresholds

Set up detectors to alert on KPI thresholds, such as availability and performance. For example, start by monitoring page availability, server errors (HTTP 5xx status codes), or deviations in the KPIs for your synthetic browser test transactions (e.g., search is slow to return results).

Start Simple with Availability Alerts

Begin with basic availability alerts to ensure your site is online. Common availability alerts include checks for SSL certificate validity and server connection errors. Once your tests are stable, evolve your alerting thresholds to react to seasonal or historical anomalies for critical performance metrics, rather than relying on static thresholds. A great place to start with performance alerts is at the transaction level, for example, generating alerts if the time it takes to render search results or authenticate a user deviates from the historic baseline.

And remember: document everything! Knowing what your test does and why will help your team respond appropriately when alerts trigger.

Analyze and Optimize

Synthetic browser tests are capable of generating vast amounts of data.
Common synthetics metrics include page performance timings, page web vitals (CLS, LCP, TBT), page connection timings, page resource and error counts, score metrics (such as Lighthouse), page content size metrics, and transaction-level metrics (duration, requests, and total size). This data can be extremely valuable for troubleshooting failures and identifying optimization opportunities. For example:

- Regularly review your Core Web Vitals and determine if they differ from industry standards. Implement optimizations and leverage the data to determine their effect.
- Integrate your deployment pipelines with your synthetic testing data. This will allow you to overlay application deployments and quickly correlate changes with availability issues and/or deviations in performance metrics.
- Develop common synthetics test dashboards that help analyze your synthetic data over time. Reference these dashboards in your alerts, as this analysis should expedite the response process and ultimately reduce MTTR.

Conclusion

Synthetic browser tests are an invaluable part of the modern observability stack, but only if you approach them with care and strategy. Keep your focus on critical user journeys, validate thoroughly, maintain solid test hygiene, and always stay alert (literally). By applying these tips, you’ll not only enhance your synthetic testing framework but also provide more robust observability coverage for your applications.

If you’re interested in content similar to this, I’d encourage you to check out the Observability Developer Evangelist blogs and/or our YouTube Playlist.
Step-by-Step Guide to Setting Up AppDynamics Smart Agent on Windows Systems

1. Go to the AppDynamics Downloads area in Accounts. On the Agent tab, under "Type", select "Agent Management", then download the AppDynamics Smart Agent for Windows.
2. On Windows, you require Administrator access to start the Smart Agent; therefore, you cannot start the Smart Agent as a regular process.
3. Once you have downloaded the Smart Agent on the Windows box, unzip the content. The unzipped content contains the files below as of version 24.8.0 of AppDynamics Smart Agent.
4. Edit the config.ini file, specifically the below section:

ControllerURL=<Your-AppDynamics-Controller-Url>
ControllerPort=<Your-AppDynamics-Controller-port>
FMServicePort=<Your-AppDynamics-Controller-port>
AgentType=<Let-this-be-null>
AccountAccessKey=<Your-AppDynamics-Controller-accessKey>
AccountName=<Your-AppDynamics-Controller-Account-name>
EnableSSL=<True-If-SSL-Is-Enabled-Else-False>

5. Once this is edited, open the CMD prompt, go to the directory where the Smart Agent is downloaded, and run:

smartagentctl start --service

6. Once this is done, the Smart Agent should be installed as a service named “appdsmartagent”. You can confirm this from Task Manager.
7. Now, in the AppDynamics Controller UI, under Agent Management -> Smart Agent, you will be able to see the Smart Agent installed.
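Instead of Task Manager, the service check in step 6 can also be done from a terminal. A minimal sketch using the standard Windows PowerShell Get-Service cmdlet, assuming the service name "appdsmartagent" stated above:

```powershell
# Query the Smart Agent Windows service and its status
Get-Service -Name appdsmartagent | Format-List Name, Status, StartType
```

If the service is missing or stopped, re-run smartagentctl start --service from an elevated prompt.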
I am trying to create use cases and am searching the indexes, but I get an "index search not found" error message. None of my logs are showing up anywhere.
Hi guys, is there any documentation available out there to set up the Cisco Security Cloud app? Specific requirements, "failed to create an input" and similar errors, etc. Qzy
App 'Infoblox DDI' started successfully (id: 1725978494606) on asset: 'infoblox-enterprise' (id: 25)
Loaded action execution configuration
Logging into device
Configured URL: https://10.247.53.30
Querying endpoint '/?_schema' to validate credentials
Connectivity test succeeded
Exception Occurred. 'str' object has no attribute 'formate'.
Traceback (most recent call last):
  File "/opt/phantom/data/apps/infobloxddi_5ec38a6e-18c3-4cc3-ab47-2754b56aea50/infobloxddi_connector.py", line 349, in _make_rest_call
    content_type = request_obj.headers[consts.INFOBLOX_JSON_CONTENT_TYPE]
  File "/opt/phantom/data/usr/python39/lib/python3.9/site-packages/requests/structures.py", line 52, in __getitem__
    return self._store[key.lower()][1]
KeyError: 'content-type'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "lib3/phantom/base_connector.py/base_connector.py", line 3204, in _handle_action
  File "/opt/phantom/data/apps/infobloxddi_5ec38a6e-18c3-4cc3-ab47-2754b56aea50/infobloxddi_connector.py", line 1173, in finalize
    return self._logout()
  File "/opt/phantom/data/apps/infobloxddi_5ec38a6e-18c3-4cc3-ab47-2754b56aea50/infobloxddi_connector.py", line 444, in _logout
    status, response = self._make_rest_call(consts.INFOBLOX_LOGOUT, action_result)
  File "/opt/phantom/data/apps/infobloxddi_5ec38a6e-18c3-4cc3-ab47-2754b56aea50/infobloxddi_connector.py", line 357, in _make_rest_call
    self.debug_print("{}. {}".formate(message, error_message))
AttributeError: 'str' object has no attribute 'formate'
Connectivity test succeeded
Dear community, it might be an odd question, but I need to forward splunkd.log to a foreign syslog server, so I was following the sample from here: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Forwarding/Forwarddatatothird-partysystemsd

So far I have configured the forwarder to forward testing.log (should be splunkd.log later) to the foreign syslog target:

#inputs.conf
[monitor:///opt/splunk/var/log/splunk/testing.log]
disabled=false
sourcetype=testing

#outputs.conf
[tcpout]
defaultGroup=idx-cluster
indexAndForward=false

[tcpout:idx-cluster]
server=splunk-idx-cluster-indexer-service:9997

[syslog:my_syslog_group]
server = my-syslog-server.foo:514

#transforms.conf
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

So far so good: testing.log appears on the syslog server, but not just that, all other messages are forwarded too.

Question: How can I configure the (heavy) forwarder to send only testing.log to the foreign syslog server, and how can I make sure that testing.log does not get indexed? In other words, testing.log should only be sent to syslog. Many thanks in advance.