I got an error while uploading an app for installation: "There was an error processing the upload. Error during app install: failed to extract app from C:\Windows\TEMP\tmp5kgytoy5 to C:\Program Files\Splunk\var\run\splunk\bundle_tmp\2d60c8764b856899: The system cannot find the path specified."
Hi all, I've got a customer with proprietary logs in their environment, and they would like them to be CIM-mapped to a data model. The problem is that the logs don't fit any of the data models pre-configured for the CIM Mapping add-on, so I assume I will have to create a custom one that fits their environment. The thing is, I have never done this before, so I would need some advice on how to tackle it. One thing that confuses me about their environment is that their custom logs can have different formats for a single source: one event might produce a log with 32 lines, another with 12 lines, and so on. How would I deal with this?
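Before any CIM work, variable-length events usually come down to getting line breaking right. A minimal props.conf sketch, assuming each event begins with a timestamp like "2024-06-06 12:34:56" (the sourcetype name here is hypothetical):

    [custom:proprietary]
    SHOULD_LINEMERGE = false
    # Break events on a newline followed by a leading timestamp, so an
    # event may span 12 or 32 lines and still be ingested as one event
    LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S

Once events break consistently, the CIM mapping itself is field extractions/aliases plus tags and eventtypes against whichever data model you build.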
Hello Splunkers, I would like to know whether it is possible, at the indexer layer, given a HEC input source, to route some incoming data (based on a regex, of course) to a UDP destination without indexing that data. Let's say sourcetype=hecinput: if an event contains the word "DEBUG", it should go to the UDP destination; all the rest should be indexed as usual. I know that maybe I could use INGEST_EVAL, but I think it supports only _TCP_ROUTING. Thanks in advance.
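For the classic (non-INGEST_EVAL) route, transforms also support _SYSLOG_ROUTING, and a syslog output group can use UDP. A sketch with a hypothetical destination and group name (whether the matched events can also be dropped from indexing on the same instance, e.g. with an additional nullQueue transform, is worth verifying for your version):

    props.conf:
    [hecinput]
    TRANSFORMS-route_debug = route_debug_to_udp

    transforms.conf:
    [route_debug_to_udp]
    REGEX = DEBUG
    DEST_KEY = _SYSLOG_ROUTING
    FORMAT = debug_udp_group

    outputs.conf:
    [syslog:debug_udp_group]
    server = 192.0.2.10:514
    type = udp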
Hello, We are attempting to use Splunk Cloud as a multi-tenant environment (one company, separate entities) in a single subscription. We have an index design that isolates data to each tenant and aligns with RBAC assignments. That gets us index-level isolation for data sources that are specific to each tenant. We also stamp all non-Splunk events with a tenant code so that role-based access restrictions can be used to filter returned data down to only that which matches your assigned code. This approach allows for event-level filtering in indexes where data belongs to ALL tenants, such as lastchanceindex.

The last set of logs we need to control access to are the underscore indexes. These events are collected based on the inputs.conf files that deploy with the HF and UF agents, which do NOT have our tenant codes associated with them. I was looking for any feedback from the community as to what the downside might be to copying ../etc/system/default/inputs.conf into ../etc/system/local/inputs.conf and adding "_meta = tenantcode::<your_tenant_code_goes_here_without_the_angled_brackets>" to each stanza. At that point, all SPLUNK events in the underscore indexes would also contain tenant codes, and we'd be able to achieve event-level filtering there as well. Thanks in advance for any feedback and opinions!
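For reference, a sketch of the kind of stanza override in question, using the _meta attribute with a hypothetical tenant code (one known downside of wholesale copies of etc/system/default files is that they freeze defaults that upgrades would otherwise update):

    # $SPLUNK_HOME/etc/system/local/inputs.conf
    [monitor://$SPLUNK_HOME/var/log/splunk]
    _meta = tenantcode::tenant_a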
Below is a query that includes all the events for Windows shutting down and starting up. I want to exclude a host when event 4608 is observed within 5 minutes:

index=windows product=Windows (EventCode="4609" OR EventCode="4608" OR EventCode="6008") | table _time name host dvc EventCode severity Message

Please share the query. Thanks
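One way to read "exclude the host when 4608 is observed within 5 minutes" is to flag hosts whose startup event (4608) arrives within 300 seconds of the previous shutdown event and drop them. A sketch reusing the fields from the query above:

    index=windows product=Windows (EventCode="4609" OR EventCode="4608" OR EventCode="6008")
    | sort 0 host _time
    | streamstats current=f last(_time) as prev_time by host
    | eval quick_restart=if(EventCode="4608" AND (_time - prev_time) <= 300, 1, 0)
    | eventstats max(quick_restart) as host_restarted by host
    | where host_restarted=0
    | table _time name host dvc EventCode severity Message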
Hello everyone,
I am using the Machine Agent (without bundled JRE) version 22.6, as indicated in the past-releases docs. When I run java -jar machineagent.jar, I get the error: Could not find the main class: com.appdynamics.agent.sim.bootstrap.Bootstrap. Program will exit.
I ran the command jar tf machineagent.jar | grep com/appdynamics/agent/sim/bootstrap/Bootstrap.class, and the class shows up, so I don't understand what I'm doing wrong. Can you help me? (doc: https://docs.appdynamics.com/appd/24.x/latest/en/product-and-release-announcements/past-releases/past-agent-releases#id-.PastAgentReleasesv24.2-Version22.6.0-June29,2022)
I've been asked to show how many events are over the p99 (for example) value for a search. I'm hopefully just missing something obvious, but I've not figured out how, or if, this is possible. Thx
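If the goal is a count of events above the 99th percentile of some numeric field, eventstats can compute the percentile inline and a where clause can filter on it. A sketch with hypothetical index, sourcetype, and field names:

    index=web sourcetype=access_combined
    | eventstats perc99(response_time) as p99
    | where response_time > p99
    | stats count as events_over_p99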
In a Classic dashboard, I can add <option name="drilldown">row</option> and it highlights the entire row. My drilldown is not cell-specific, so this is what I want in my Dashboard Studio dashboard. I've tried:

    "viz_Yy6FIxmM": {
        "type": "splunk.table",
        "options": {
            "tableFormat": {
                "align": "> table |pick(alignment)"
            },
            "drilldown": "row"
        }
    },

... with no change. Thx!
Hi, how do I convert UTC time into mmddyy format? I tried this query:

| makeresults | eval time | eval readable_time = strftime(strptime(tostring(time/1000), "%s"), "%m%d%y") | table time, readable_time

The time field is in epoch milliseconds, e.g. time: 1717690912746. Thank you
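A sketch of a working version, assuming time is epoch milliseconds: strftime takes epoch seconds directly, so dividing by 1000 is enough, and the strptime/tostring round-trip can be dropped; the bare "| eval time" also needs a value assigned:

    | makeresults
    | eval time=1717690912746
    | eval readable_time=strftime(time/1000, "%m%d%y")
    | table time, readable_time

For the sample value this yields readable_time=060624 in a UTC-configured environment (strftime renders in the user's configured timezone).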
I'm encountering an issue while trying to instrument AppDynamics in a Node.js microservice. Here are the details:

Node.js version: v14.21.2
npm version: 8.19.4
Operating system: Linux BBTMF-4AF63ZF83 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
AppDynamics version: 23.3.0

The installation process (npm install --legacy-peer-deps) succeeds, but when I run npm run start, I encounter an error message related to appdynamics-libagent-napi. Has anyone else faced a similar issue with this library? Any ideas on how to resolve it? I'm running out of solutions here.
It looks like the latest version of this add-on isn't compatible with Splunk Cloud due to the Add-on Builder version it was built with, so app version 1.0.2 has to be used with Splunk Cloud instead of the latest release, 1.1.1. Can anyone confirm whether there are plans to re-compile the add-on with the latest Add-on Builder, and whether the latest app is supported on Splunk Cloud?
Hi all, I am attempting to create an add-on with the Add-on Builder that queries a REST API for stats and saves them to a metrics index. A requirement is that it needs to be smart during outages and recovery, so I thought I could just add checkpointing to the add-on. As a test, I stopped Splunk for a few minutes, then started it up and got the behavior shown in the attached screenshot: the spike is where Splunk was restarted. What appears to be happening is that the input resumes from the checkpoint but runs just one query, from where it left off to the current time, and returns the sum. What I want is basically a backfill: start at the saved checkpoint, run the REST query for each 30-second interval up to now, and save those values. Is that possible in the Add-on Builder, or does it require additional coding?
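A backfill loop like that typically means a small amount of custom code in the generated input module. A rough Python sketch, assuming the standard Add-on Builder helper checkpoint methods; query_api() is a hypothetical wrapper around your REST call for one window, and the sourcetype and data format are placeholders:

    # in the input_module_<name>.py generated by the Add-on Builder
    import time

    STEP = 30  # seconds per query window

    def collect_events(helper, ew):
        now = int(time.time())
        # resume from the checkpoint, or start one window back on first run
        start = helper.get_check_point("last_ts") or (now - STEP)
        while start + STEP <= now:
            end = start + STEP
            value = query_api(helper, start, end)  # hypothetical REST call
            event = helper.new_event(
                data="stat_value=%s window_end=%d" % (value, end),
                time=end,
                sourcetype="rest:stats",  # placeholder
            )
            ew.write_event(event)
            helper.save_check_point("last_ts", end)
            start = end

Because each 30-second window gets its own event and its own checkpoint update, a restart replays every missed window instead of summing one long query.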
I am having an issue with the Advanced Hunting for Defender app in Splunk (https://splunkbase.splunk.com/app/5518). My original KQL query in Azure contains | join kind=inner. Is such syntax also possible in SPL?
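SPL does have a join command with an inner type, though subsearch limits often make stats-based correlation the preferred pattern at scale. A sketch of the direct translation, with hypothetical index, sourcetype, and field names:

    index=defender sourcetype=left_table
    | join type=inner DeviceId
        [ search index=defender sourcetype=right_table
          | fields DeviceId, AccountName ]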
Hello Splunkers!! After resetting my admin password, the users' accounts are gone and are not visible in the UI. How can I restore all those user login accounts? In the screenshot (not shown here) from the backend I can see the users, but from the UI no user account is visible.
Could you please confirm whether Splunk uses TLS/SSL for the following communications by default, or whether it must be configured manually (see the sketch after the list):
Between Search Heads and Indexers
Between Indexers (Index Clustering)
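For orientation, the relevant server.conf settings look roughly like the sketch below (illustrative values; defaults vary by version, so verify against your release's documentation). Management-port traffic between splunkd instances is governed by enableSplunkdSSL, while TLS on a cluster peer's replication port is a separate, opt-in stanza:

    [sslConfig]
    enableSplunkdSSL = true

    # on each cluster peer, replace a plain [replication_port://9887] with:
    [replication_port-ssl://9887]
    serverCert = $SPLUNK_HOME/etc/auth/server.pem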
Hi, We are integrating Splunk with our Microsoft Azure SSO and followed the instructions from https://learn.microsoft.com/en-us/entra/identity/saas-apps/splunkenterpriseandsplunkcloud-tutorial#configure-microsoft-entra-sso. But after all the configuration, we are hitting "No valid Splunk role found in local mapping". We also checked "Configure SSO with Microsoft Azure AD or AD FS as your Identity Provider" in the Splunk documentation to remove the alias, but were not able to make it work.
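That error generally surfaces when none of the group values the IdP sends match a mapped Splunk role. The mapping lives in authentication.conf (or Settings > Authentication methods > SAML group mapping); with Entra ID, the group claim is often the group object ID rather than the display name. A sketch with hypothetical object IDs:

    [roleMap_SAML]
    admin = 00000000-aaaa-bbbb-cccc-111111111111
    user = 00000000-aaaa-bbbb-cccc-222222222222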
Hello, We are ingesting Check Point logs through an Edge Processor to our Splunk Cloud Platform (SCP). We have deployed the Splunk Add-on for Check Point Log Exporter in SCP, but events are not parsing properly, as shown in the screenshot (not included here): we can only use the fields related to the Edge Processor. Could someone help us? Thanks in advance
Hi Team, We use a MongoDB Python script to get logs into Splunk. We saw historical logs being ingested, so we changed the checkpoint value to 2024-05-03 08:46:13.327000. After that it worked fine for some time, but then we again saw historical data being ingested at 2 a.m. How do we fix this? What should the checkpoint file value be?