All Posts

index=gbts-vconnection sourcetype=VMWareVDM_debug "onEvent: DISCONNECTED" (host=Host1 OR host=host2) | rex field=_raw "(?ms)^(?:[^:\\n]*:){5}(?P<IONS>[^;]+)(?:[^:\\n]*:){8}(?P<Device>[^;]+)(?:[^;\\n]*;){4}\\w+:(?P<VDI>\\w+)" offset_field=_extracted_fields_bounds | eventstats count as failed_count by IONS | where failed_count>=10 | timechart dc(IONS) as IONS span=1d This command does get me accurate stats for the last 24 hours (11/5/23-11/6/23). However, when I change the time picker to 30 days, it shows a very large number for 11/5, 11/6, and every other day in that 30-day period. I need the timechart to show only the IONS that have disconnected 10 or more times, and to show that number daily in a line chart. I can't seem to get this to work. Thank you!
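[Editor's note: a minimal sketch of one likely fix, using the field names from the question and untested against real data. Because eventstats computes failed_count across the entire search window, a 30-day search lets every IONS accumulate its full-period total; binning into days before counting keeps the threshold per-day:]

    index=gbts-vconnection sourcetype=VMWareVDM_debug "onEvent: DISCONNECTED" (host=Host1 OR host=host2)
    | rex field=_raw "(?ms)^(?:[^:\\n]*:){5}(?P<IONS>[^;]+)(?:[^:\\n]*:){8}(?P<Device>[^;]+)(?:[^;\\n]*;){4}\\w+:(?P<VDI>\\w+)"
    | bin _time span=1d
    | stats count AS failed_count BY _time IONS
    | where failed_count>=10
    | timechart span=1d dc(IONS) AS IONS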
Hi @olawalePS , are you sure about the time format? could you share a sample of your logs? Ciao. Giuseppe
You can edit props on the SH by going to Settings->Source types, but that won't do you much good.  The props in question are index-time and must be installed on indexers.  To make the change, create or update an app on the Cluster Manager and apply the bundle.  If you don't have access then find someone who does.
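[Editor's note: a sketch of what such an app might contain; the stanza name and settings below are hypothetical examples, not taken from this thread. Index-time settings only take effect where the data is parsed, i.e. on the indexers:]

    # $SPLUNK_HOME/etc/manager-apps/<app>/local/props.conf on the Cluster Manager
    [my:sourcetype]
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    LINE_BREAKER = ([\r\n]+)
    SHOULD_LINEMERGE = false

After staging the app, push it to the peers with `splunk apply cluster-bundle` on the Cluster Manager.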
The results remained the same.
Renaming it did not change the results.
I guess I was trying to overcomplicate things. It's starting to make more sense now, working through all of these different scenarios. Thanks for the explanation...
Hey Yann, I tried the following. 1. Create a schema in Analytics using the steps in: https://docs.appdynamics.com/appd/21.x/21.3/en/extend-appdynamics/appdynamics-apis/analytics-events-api#id-.AnalyticsEventsAPIv21.3-create_schemaCreateEventSchema 2. Instead of a DB agent custom metric, use a shell script to get the data from the DB and post it to the schema created in step 1, running the script via the Command Watcher extension or a cron job. (You can modify the SQL monitor extension as well to send data to the event service schema created in step 1.) Below is the data which I got into the events service using a script that posts data to the "Jetmail" schema I created in step 1 above. Regards, Gopikrishnan R.
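[Editor's note: a minimal Python sketch of posting rows to an Analytics Events schema, assuming the publish endpoint documented for the Events API; the URL, account name, key, and the "Jetmail" field names are placeholders/assumptions, not values from this thread:]

```python
import json
import urllib.request

# Hypothetical values -- substitute your own SaaS events endpoint and credentials.
EVENTS_URL = "https://analytics.api.appdynamics.com/events/publish/Jetmail"
ACCOUNT = "<global-account-name>"
API_KEY = "<events-api-key>"

def build_request(rows):
    """Build a POST request for the Analytics Events API.
    The body is a JSON array of events whose keys match the schema's fields."""
    body = json.dumps(rows).encode("utf-8")
    req = urllib.request.Request(EVENTS_URL, data=body, method="POST")
    req.add_header("X-Events-API-AccountName", ACCOUNT)
    req.add_header("X-Events-API-Key", API_KEY)
    req.add_header("Content-type", "application/vnd.appd.events+json;v=2")
    return req

# Sample row as produced by the DB query; field names are illustrative only.
rows = [{"queue_name": "outbound", "pending_count": 42}]
req = build_request(rows)
# urllib.request.urlopen(req)  # uncomment to actually send
```

The same request shape works from a shell script with curl, which is what the Command Watcher / cron approach above would invoke.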
Hi @richgalloway, Where can I edit the props on the SH? I don't have backend access.
Hello @Chandukumar.Peddapatla, What are you asking or needing help with exactly?
Hi @Naveen.Reddy, Unfortunately, your post was flagged as spam and has just been released into the Community for visibility. Have you happened to find any new info or a solution since posting?
Those are three different fields, which you aren't including in your table command (so they would be dropped). Perhaps you should consider concatenating the counts and the elapsed times (much like you did with the category and time) before the untable, then splitting them out again later?
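[Editor's note: a rough sketch of that concatenate-then-split pattern; the field names are hypothetical, not from the original search:]

    | eval Care = CareCnts . "|" . CareElapsed
    | eval Cover = CoverCnts . "|" . CoverElapsed
    | table category Care Cover
    | untable category series value
    | eval count = mvindex(split(value, "|"), 0)
    | eval elapsed = mvindex(split(value, "|"), 1)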
It's derived through eval: | eval CareCnts=spath(json, "Info.Care.Redcount") | eval CoverCnts=spath(json, "Info.Cover.Redcount") | eval NonCoverCnts=spath(json, "Info.NonCover.Redcount")
I spoke too soon. It appears that the numbers are not accurate. It shows the proper number if I set the time picker to last 24 hours, but once I select last 30 days the number for yesterday increases by hundreds.
I haven't specifically worked on this Add-on but just adding a cert somewhere won't be enough. You'll need to refer to it in a conf file. I'd start by looking at the app doc (if any) or the code to see if it mentions what config file to use.
Hello, First of all, I'm far, far away from JavaScript. But maybe those who know it could help: it seems to me Splunk removed moment.js after the update. For me, it can still be found in the /opt/splunk/quarantined_files/share/splunk/search_mrsparkle/exposed/js/contrib/ folder. The new(?) version is supposed to be here: /opt/splunk/share/splunk/search_mrsparkle/exposed/js/contrib/moment/lang/ , but it seems "localised" now. Or I'm totally wrong. Please share your thoughts; I faced the same issue with the Cisco Cloud Security App. Regards, Norbert
Hello @bora.min, Let me reach out to the Accounts team and see what we can do for you here. I'll be in touch.
How is NewColumn derived, especially since you haven't included CareCnts, CoverCnts and NonCoverCnts in your first table command?
Hello @Guilherme.Drehmer, Thanks for sharing this. Are you intending to share this as a feature request? 
Hello @Jian.Zhang, I will reach out to the Accounts team and get this handled. Do you want all your AppD Account Data deleted (this includes your Community account, meaning you will not be able to sign back in here)? Please let me know.
Hi @sekhar463, let me understand: do you want only hosts present in both searches, or what's the rule? If present in both searches:

    index=_internal sourcetype=splunkd source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections
        [ search index="index1" (puppet-agent OR puppet) *Error* "/Stage["
        | rename host AS hostname
        | fields hostname ]
    | table hostname sourceIp
    | dedup hostname

This search works if the subsearch returns fewer than 50,000 results; if there are more than 50,000 you need a different approach:

    (index=_internal sourcetype=splunkd source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections) OR (index="index1" (puppet-agent OR puppet) *Error* "/Stage[")
    | eval hostname=coalesce(hostname, host)
    | stats values(sourceIp) AS sourceIp dc(index) AS index_count BY hostname
    | where index_count=2
    | fields - index_count

Ciao. Giuseppe