There is a REST endpoint, /services/search/v2/parser, that you may be able to use to parse queries into their component commands. It requires the POST method, so it will have to be called from a script (not from the UI). See https://docs.splunk.com/Documentation/Splunk/9.1.1/RESTREF/RESTsearch#search.2Fv2.2Fparser
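As a minimal sketch of calling it from a script (the host, port, and Authorization value below are placeholders for your own environment; the `q` parameter name follows the linked REST reference):

```python
import json
import ssl
import urllib.parse
import urllib.request

BASE = "https://localhost:8089"      # hypothetical management host/port
AUTH_HEADER = "Splunk <session-key>" # placeholder Authorization value

def build_parser_request(spl: str) -> urllib.request.Request:
    """Build (but do not send) the POST to the v2 parser endpoint."""
    body = urllib.parse.urlencode({"q": spl, "output_mode": "json"}).encode()
    return urllib.request.Request(
        f"{BASE}/services/search/v2/parser",
        data=body,  # presence of a body makes this a POST
        headers={"Authorization": AUTH_HEADER},
        method="POST",
    )

if __name__ == "__main__":
    req = build_parser_request("search index=_internal | stats count by sourcetype")
    # Management port certs are often self-signed, hence the unverified context.
    ctx = ssl._create_unverified_context()
    with urllib.request.urlopen(req, context=ctx) as resp:
        print(json.dumps(json.load(resp), indent=2))
```

The response (with output_mode=json) describes each command in the parsed pipeline.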
I tested it: if you copy moment.js back from /opt/splunk/quarantined_files/share/splunk/search_mrsparkle/exposed/js/contrib/ to /opt/splunk/share/splunk/search_mrsparkle/exposed/js/contrib/, the Cisco app works. But this is a hack, and Splunk will complain about file integrity. I think Cisco needs to update its app.
I have events like this:

11/06/2023 12:34:56 ip 1.2.3.4 This is record 1 of 5
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
user 1 1.0 0.0 2492 604 ? Ss 12:27 0:00 proc01
user 6 0.5 0.0 2608 548 ? S 12:27 0:00 proc02
user 19 0.0 0.0 12168 7088 ? S 12:27 0:00 proc03
user 223 0.0 0.1 852056 39300 ? Ssl 12:27 0:00 proc04
user 470 0.0 0.0 7844 6016 pts/0 Ss 12:27 0:00 proc05
user 683 0.0 0.0 7872 3380 pts/0 R+ 12:37 0:00 proc06

11/06/2023 12:34:56 ip: 1.2.3.4 This is record 2 of 5
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
user 1 0.0 0.0 2492 604 ? Ss 12:27 0:00 proc07
user 6 9.0 0.0 2608 548 ? S 12:27 0:00 proc08
user 19 6.0 0.0 12168 7088 ? S 12:27 0:00 proc09
user 223 0.0 0.1 852056 39300 ? Ssl 12:27 0:00 proc10
user 470 0.0 0.0 7844 6016 pts/0 Ss 12:27 0:00 proc11
user 683 0.0 0.0 7872 3380 pts/0 R+ 12:37 0:00 proc12

and repeating with different data, but the same structure: record 1 of 18...record 2 of 18...etc. The dates and times are the same for each "subsection" of the ps command. I want to be able to make a graph of each "proc" to show its CPU and memory usage over time. The processes will be in a random order. I have the timeline parsed with fields extracted (like the ip), and want the header of the ps command to become field names for the ps data. I'm struggling with this! I tried mvexpand and/or max_match=0 but failed. Thanks for any help.
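(In SPL, the multikv command is designed for exactly this header-plus-rows table layout, so that may be worth trying before hand-rolled rex extractions.) Outside SPL, the field mapping being asked for looks like this in Python, using sample rows from the post:

```python
SAMPLE = """\
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
user 1 1.0 0.0 2492 604 ? Ss 12:27 0:00 proc01
user 6 0.5 0.0 2608 548 ? S 12:27 0:00 proc02
"""

def parse_ps(block: str) -> list[dict]:
    """Turn a ps block into one dict per row, keyed by the header line."""
    lines = block.strip().splitlines()
    headers = lines[0].split()
    # COMMAND (the last column) may itself contain spaces, so cap the
    # split at len(headers) - 1 to keep it intact.
    return [dict(zip(headers, line.split(None, len(headers) - 1)))
            for line in lines[1:]]

rows = parse_ps(SAMPLE)
print(rows[0]["COMMAND"], rows[0]["%CPU"])  # proc01 1.0
```

Each row dict then carries %CPU and %MEM per process name, which is the shape a per-process usage chart needs.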
So I have attached two images: computers that have checked in within the last 60 days (274), and the subset of those that have CBC installed (270). What I want now is a query to identify the 4 devices that do not have the app installed.
index=gbts-vconnection sourcetype=VMWareVDM_debug "onEvent: DISCONNECTED" (host=Host1 OR host=host2)
| rex field=_raw "(?ms)^(?:[^:\\n]*:){5}(?P<IONS>[^;]+)(?:[^:\\n]*:){8}(?P<Device>[^;]+)(?:[^;\\n]*;){4}\\w+:(?P<VDI>\\w+)" offset_field=_extracted_fields_bounds
| eventstats count as failed_count by IONS
| where failed_count>=10
| timechart dc(IONS) as IONS span=1d

This query gets me accurate stats for the last 24 hours (11/5/23-11/6/23). However, when I change the time picker to 30 days, it shows a very large number for 11/5 and 11/6 and for every other day in that 30-day period. I need the timechart to show only the IONS that have disconnected 10 or more times, and to show that number daily in a line chart. I can't seem to get this to work. Thank you!
You can edit props on the SH by going to Settings -> Source types, but that won't do you much good. The props in question are index-time and must be installed on the indexers. To make the change, create or update an app on the Cluster Manager and apply the bundle. If you don't have access, find someone who does.
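For illustration, a minimal index-time props.conf as it might look in an app on the Cluster Manager (the app name, sourcetype, and settings below are made up; substitute your own):

```ini
# $SPLUNK_HOME/etc/manager-apps/fix_props_app/local/props.conf on the Cluster Manager
# (the directory is "master-apps" on older Splunk versions). App and stanza are hypothetical.
[my:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %m/%d/%Y %H:%M:%S
```

After placing the app there, run `splunk apply cluster-bundle` on the Cluster Manager to validate and push the bundle to the peer indexers.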
I guess I was trying to overcomplicate things. It's starting to make more sense now, working through all of these different scenarios. Thanks for the explanation...
Hey Yann, I tried the following. 1. Create a schema in Analytics using the steps in https://docs.appdynamics.com/appd/21.x/21.3/en/extend-appdynamics/appdynamics-apis/analytics-events-api#id-.AnalyticsEventsAPIv21.3-create_schemaCreateEventSchema 2. Instead of a DB agent custom metric, use a shell script to get the data from the DB and post it to the schema created in step 1, and run that script via the Command Watcher extension or a cron job. (You can modify the SQL monitor extension as well to send data to the events service schema created in step 1.) Below is the data which I got into the events service using a script which posts to the Jetmail schema I created in step 1 above. Regards, Gopikrishnan R.
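A sketch of the publish call such a script would make (the endpoint URL, account name, and key below are placeholders; check the Analytics Events API docs for the values for your controller):

```python
import json
import urllib.request

# Placeholder connection details -- substitute your events-service values.
ENDPOINT = "https://analytics.api.example.com"  # hypothetical events-service URL
ACCOUNT = "customer1_xxxx"                       # global account name placeholder
API_KEY = "<events-api-key>"                     # API key placeholder

def build_publish_request(schema: str, events: list[dict]) -> urllib.request.Request:
    """Build (but do not send) the POST that publishes rows to a custom schema."""
    return urllib.request.Request(
        f"{ENDPOINT}/events/publish/{schema}",
        data=json.dumps(events).encode(),
        headers={
            "X-Events-API-AccountName": ACCOUNT,
            "X-Events-API-Key": API_KEY,
            "Content-type": "application/vnd.appd.events+json;v=2",
        },
        method="POST",
    )

# Example rows for the schema created in step 1 (field names are illustrative).
req = build_publish_request("Jetmail", [{"queue": "outbound", "depth": 42}])
```

Sending `req` with `urllib.request.urlopen(req)` would publish the batch; a cron job or Command Watcher can run the script on a schedule.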
Hi @Naveen.Reddy,
Unfortunately, your post was flagged as spam and has just been released into the Community for visibility. Have you happened to find any new info or a solution since posting?
That's three different fields, which you aren't including in your table command (so they would be dropped). Perhaps you should consider concatenating the counts and the elapsed times (much like you did with the category and time) before the untable, then splitting them out again later?
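The pack-then-unpack pattern being suggested, sketched in Python for clarity (in SPL this would be an eval with a delimiter before the untable and a split() after; the field names here are made up):

```python
# Pack related values into one delimited string so they survive a reshaping
# step that only keeps a single value field, then unpack them afterwards.
rows = [{"category": "A", "count": 7, "elapsed": 1.25},
        {"category": "B", "count": 3, "elapsed": 0.4}]

packed = [{"category": r["category"],
           "value": f'{r["count"]}|{r["elapsed"]}'} for r in rows]

# ... the reshaping (the untable analogue) would happen here ...

unpacked = []
for r in packed:
    count, elapsed = r["value"].split("|")
    unpacked.append({"category": r["category"],
                     "count": int(count), "elapsed": float(elapsed)})
```

Pick a delimiter that cannot occur inside either value, otherwise the split will misalign.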
I spoke too soon. It appears that the numbers are not accurate. It shows the proper number if I set the time picker to last 24 hours, but once I select last 30 days, the number for yesterday increases by hundreds.
I haven't specifically worked on this Add-on, but just adding a cert somewhere won't be enough. You'll need to refer to it in a conf file. I'd start by looking at the app's documentation (if any) or its code to see if it mentions which config file to use.