Hi @andgarciaa, are you speaking of Splunk Cloud or on-premises? If Splunk Cloud, you have to ask your Splunk sales representative. If on-premises, the only cost is the additional storage, which you can estimate by duplicating the current storage. Ciao. Giuseppe
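As a rough starting point for that estimate, you can measure how much disk the index currently uses with dbinspect. A sketch only; index-name is a placeholder, and it assumes doubling retention roughly doubles storage at a steady ingest rate:

```spl
| dbinspect index=index-name
| stats sum(sizeOnDiskMB) AS currentMB
| eval estimatedMB_180d = currentMB * 2
```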
If I have an index with a retention of 90 days, can I make a rough estimate of the cost of extending the retention of index=index-name by an extra 90 days?
Hi, we decided to create backups and just go for it. It worked fine! After the upgrade, everything is indexing without any issues. Also no problems during the upgrade from the MSI. Thanks for giving us a little courage, I guess. We decided to "experiment". For the greater good, heh.
If you mean to change the standard timepicker to include your special options in a modified timepicker, try adding new time ranges: time ranges are configured in the Settings -> Knowledge -> User interface -> Time ranges section of the Splunk interface.
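If you prefer configuration files over the UI, custom time ranges live in times.conf. A minimal sketch, assuming an app context; the stanza name and label are hypothetical:

```ini
# times.conf (e.g. $SPLUNK_HOME/etc/apps/<your_app>/local/times.conf)
# [last_2_weeks] is a made-up stanza name; label is what the timepicker displays
[last_2_weeks]
label = Last 2 weeks
earliest_time = -2w
latest_time = now
order = 50
```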
Explain to me the structure of a configuration file in Splunk: what components does it contain, and what do we call them? [What are the important configuration files in Splunk, and what is the purpose of these different files? If a file such as inputs.conf is present in multiple apps, how will Splunk consolidate it? What is the file precedence order? Can I have my own configuration file name, such as mynameinputs.conf? Will it work, and how?]
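For context, a Splunk .conf file is plain text organized into stanzas (bracketed headers) with attribute = value settings underneath; when the same file exists in multiple apps, Splunk merges the copies at runtime following its configuration file precedence rules. A minimal sketch with hypothetical paths and index names:

```ini
# inputs.conf -- one file, two stanzas (paths and index names are examples)
[monitor:///var/log/messages]   # the stanza header names the monitored input
index = os_logs                 # attribute = value pairs configure the stanza
sourcetype = syslog

[monitor:///var/log/secure]
index = os_logs
sourcetype = linux_secure
```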
The statement is not working. According to the above selection, the earliest time should be 5/8/2024 05:00:00 and the latest time should be 5/9/2024 06:00:00 (because the time span selected is +1d), but it is not working despite using the eval statement below.

<eval token="latest_Time">relative_time($time.latest$, $timedrop$)</eval>

Results:
@Ryan.Paredez I have tried the installation again on a new VM. I did all steps as mentioned. I am able to see the Custom Metric/Linux Monitor folder under the VM on the AppD dashboard, but under mountedNFSStatus I am not getting any data. Sharing a snapshot below. I am also getting a NullPointerException in the machine agent logs:

vm==> [Agent-Monitor-Scheduler-1] 13 May 2024 05:56:29,943 INFO MetricWriteHelperFactory-Linux Monitor - The instance of MetricWriteHelperFactory is com.appdynamics.extensions.MetricWriteHelper@e8e0a3b
vm==> [Monitor-Task-Thread3] 13 May 2024 05:56:30,446 ERROR NFSMountMetricsTask-Linux Monitor - Exception occurred collecting NFS I/O metrics
java.lang.NullPointerException: null
    at com.appdynamics.extensions.linux.NFSMountMetricsTask.getMountIOStats(NFSMountMetricsTask.java:173) [?:?]
    at com.appdynamics.extensions.linux.NFSMountMetricsTask.run(NFSMountMetricsTask.java:66) [?:?]
    at com.appdynamics.extensions.executorservice.MonitorThreadPoolExecutor$TaskRunnable.run(MonitorThreadPoolExecutor.java:113) [?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
vm==> [Monitor-Task-Thread1] 13 May 2024 05:56:30,763 INFO LinuxMonitorTask-Linux Monitor - Completed the Linux Monitoring task
Big thanks to you, @ITWhisperer. The solution works flawlessly, and I'm particularly impressed by the elegant use of the foreach command. It perfectly matches our requirements. Thanks for the guidance and assistance.
Attached is sample data from two tables. For each of SNC1 and SNC2 there will be data every 15 minutes, and the values can differ. The idea is to build a timeseries per SNC on any of the values, with filtering mainly based on SNC and one or more of the values at the same time.
Report data would be as below:

par1  time             b       e     f     g     l         m         n         r      s
SNC1  12/5/2024 16:30  299367  -7.7  -7.9  -7.7  1.00E-37  1.00E-37  1.80E-07  13.93  12.91
SNC1  12/5/2024 16:45  299364...
What @richgalloway said, but whenever you reference a JSON field containing dots on the right-hand side of an eval you MUST wrap the field name in single quotes, i.e. the first suggestion should be

eval Error=case(isnotnull('attr.error'), 'attr.error', isnotnull('attr.error.errmsg'), 'attr.error.errmsg')

But for your solution the coalesce() option would make sense. Note the use of single quotes there, always on the right-hand side of the eval. This applies not just to JSON field names, but to any field name that contains non-simple characters or starts with a number.
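For instance, the coalesce() variant with properly quoted JSON field names might look like this (a sketch, using the field names from the case() snippet above):

```spl
| eval Error=coalesce('attr.error', 'attr.error.errmsg')
```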
See this: https://docs.splunk.com/Documentation/ES/7.3.1/Admin/Formatassetoridentitylist So your search will be

index=my_asset_source ...
| eval priority="high"
| table nt_host priority ...
| outputlookup my_asset_definition.csv

You just need to fill in the gaps so you can collect the fields mentioned in the document. Set the priority based on your business rules for how you want to assign it.
Hi @uagraw01, Browsers will not trust your self-signed certificates without additional configuration. In most cases, you'll want to use a certificate signed by a mutually trusted certificate authority. This is not an endorsement of Qualys, but https://www.ssllabs.com/ provides general information on SSL/TLS that you may find beneficial.
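Once you have a CA-signed certificate, Splunk Web is typically pointed at it in web.conf. A minimal sketch, assuming hypothetical certificate file names (paths are relative to $SPLUNK_HOME):

```ini
# web.conf on the search head -- file names below are placeholders
[settings]
enableSplunkWebSSL = true
serverCert = etc/auth/mycerts/mySplunkWebCert.pem
privKeyPath = etc/auth/mycerts/mySplunkWebPrivateKey.key
```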
Yes, it is possible to upgrade forwarders first. As you've noted, that is contrary to the published procedures and may not work. Also, Splunk version 7.3.1 is well outdated so there is no guidance about its compatibility with other versions. This will be an interesting experiment. Please let us know how it goes.