Unsure what format "time" is in for you. You could try converting "time" into unix time (assuming it is populating differently from "_time"), and then converting it back into readable_time: | eval time=strptime(time, "%H:%M") (the format string may vary depending on its current format) | eval readable_time = strftime(time, "%m/%d/%Y") Be sure not to confuse "time" and "_time"; "_time" is in unix time by default. https://docs.splunk.com/Documentation/SCS/current/SearchReference/DateandTimeFunctions If this helped, karma is appreciated.
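For example, a minimal runnable sketch (assuming "time" arrives as a string like "2024-06-06 14:30:00"; adjust the strptime format string to match your data):

| makeresults
| eval time="2024-06-06 14:30:00"
| eval time=strptime(time, "%Y-%m-%d %H:%M:%S")
| eval readable_time=strftime(time, "%m/%d/%Y")
| table time readable_time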
I've been asked to show how many events are over the p99 (for example) value for a search. I'm hopefully just missing something obvious, but I've not figured out how, or if, this is possible. Thx
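For illustration, this is the sort of thing I'm trying to express (a sketch only; "response_time" is a placeholder field name):

| eventstats perc99(response_time) as p99
| where response_time > p99
| stats count as events_over_p99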
In a Classic dashboard, I can add <option name="drilldown">row</option> and it highlights the entire row. My drilldown is not cell-specific, so this is what I want in my Dashboard Studio dashboard. I've tried:

"viz_Yy6FIxmM": {
    "type": "splunk.table",
    "options": {
        "tableFormat": {
            "align": "> table |pick(alignment)"
        },
        "drilldown": "row",
...

with no change. Thx!
OK, thanks so much, I will review the information you provided. Since our Splunk Enterprise instance is running on a virtual machine, the issue you described could well be what we're hitting. Splunk internals are tricky; I will check things over. Thanks again, EWHOLZ
Hi, how do I convert UTC time into mmddyy format? I tried this query:

| makeresults
| eval time=1717690912746
| eval readable_time = strftime(strptime(tostring(time/1000), "%s"), "%m%d%y")
| table time, readable_time

The time value is in epoch milliseconds, e.g. time: 1717690912746. Thank you
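For comparison, a slightly simpler sketch (assuming the value really is epoch milliseconds, as in the example above): time/1000 is already epoch seconds, so it can go straight into strftime without the strptime round trip:

| makeresults
| eval time=1717690912746
| eval readable_time=strftime(time/1000, "%m%d%y")
| table time, readable_time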
I’m encountering an issue while trying to instrument AppDynamics in a Node.js microservice. Here are the details:

Node.js version: v14.21.2
npm version: 8.19.4
Operating system: Linux BBTMF-4AF63ZF83 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
AppDynamics version: 23.3.0

The installation process (npm install --legacy-peer-deps) succeeds, but when I run npm run start, I encounter an error message related to appdynamics-libagent-napi. Has anyone else faced a similar issue with this library? Any ideas on how to resolve it? I’m running out of solutions here.
I thought that was how it was done. The user accounts are still in the renamed passwd file. Put it back (after transferring the admin password from the new passwd file), restart, and the accounts should be restored.
Hi gcusello, My indexer receives data from fewer than 50 hosts; the environment is rather small. The format of the date is American (mm/dd/yyyy), and I can also see that when I do an ausearch -i on the audit.log; the date there is in the format I'd expect, as indicated above. In terms of the _internal index, dates are correct: yesterday's _internal index showed 06/05/2024 and the logs themselves were the same. I'm not sure I'm answering your question, however; let me know if you need other information about the _* indexes. I will try setting props.conf on this one particular host and reply back with the results tomorrow.
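For the props.conf test, something along these lines is what I have in mind (a sketch only; the sourcetype name linux:audit is an assumption, and the TIME_PREFIX/TIME_FORMAT assume raw auditd events carrying the standard msg=audit(<epoch>.<millis>:<id>) header):

[linux:audit]
TIME_PREFIX = msg=audit\(
TIME_FORMAT = %s.%3N
MAX_TIMESTAMP_LOOKAHEAD = 25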
It looks like the latest version of this add-on isn't compatible with Splunk Cloud due to the version of the Add-on Builder it was built with, so app version 1.0.2 has to be used with Splunk Cloud instead of the latest release, 1.1.1. Can anyone confirm whether there are plans to re-compile the add-on with the latest Add-on Builder, and whether the latest app version is supported on Splunk Cloud?
I have the same problem, but when I go to change the file $SPLUNK_HOME/etc/apps/TA-openai-api/local/passwords.conf as described in the GitHub repository, there is no passwords.conf file in $SPLUNK_HOME/etc/apps/TA-openai-api/local/. I hope you can help me. Thanks in advance.
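In case it helps anyone checking the same thing, one way to verify whether a credential was actually stored (a sketch; note that passwords.conf is typically only created after a credential has been saved through the setup page or REST) is to query the storage/passwords endpoint:

| rest /services/storage/passwords splunk_server=local
| table title eai:acl.app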
Hi all, I am attempting to create an add-on with the Add-on Builder that queries a REST API for stats and saves them to a metrics index. A requirement is that it needs to be smart during outages and recovery, and I thought I could just add checkpointing to the add-on. As a test, I stopped Splunk for a few minutes, then started it up again; the resulting chart showed a spike at the point where Splunk was restarted. What appears to be happening is that the input picks up at the saved checkpoint, but then runs a single query from there to the current time and returns the sum. What I want is essentially a backfill: start at the saved checkpoint, run the REST query for each 30-second interval, and save each of those values. Is that possible in the Add-on Builder, or does that require additional coding, something like the sketch below?
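If it does come down to custom code, this is roughly the shape I have in mind (a sketch only, in Python since that is what modular inputs are written in; fetch_stats, emit_metric, and the checkpoint path are placeholders, not Add-on Builder APIs):

import json
import os
import time

CHECKPOINT_FILE = "/opt/splunk/var/lib/splunk/modinputs/my_input/checkpoint"  # placeholder path
INTERVAL = 30  # seconds per backfill slice

def load_checkpoint():
    # Fall back to "now minus one interval" on the first run
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)["last_ts"]
    return int(time.time()) - INTERVAL

def save_checkpoint(ts):
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump({"last_ts": ts}, f)

def backfill(fetch_stats, emit_metric):
    # Walk forward from the checkpoint in fixed-size slices instead of
    # issuing one big query, so each 30-second window gets its own data point.
    start = load_checkpoint()
    now = int(time.time())
    while start + INTERVAL <= now:
        end = start + INTERVAL
        value = fetch_stats(start, end)  # placeholder: one REST call per slice
        emit_metric(end, value)          # placeholder: write one metric data point
        save_checkpoint(end)             # persist progress so a crash resumes here
        start = end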
Yes I did, and I put in the Object ID of the application created in Azure as the Group Name. I'm trying to figure out why it is not working. Did I put in the correct Object ID?