The statement is not working. According to the selection made above, the earliest time should be 5/8/2024 05:00:00 and the latest time should be 5/9/2024 06:00:00 (because the selected time span is +1d), but it is not working despite using the eval statement below.

<eval token="latest_Time">relative_time($time.latest$, $timedrop$)</eval>

Results:
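For comparison, relative_time() does work when its second argument resolves to a relative-time modifier string, so the question is usually what the token actually contains at eval time. A minimal run-anywhere sketch (assuming a literal "+1d" span in place of this dashboard's $timedrop$ token):

```spl
| makeresults
| eval latest_t=strptime("5/8/2024 05:00:00", "%m/%d/%Y %H:%M:%S")
| eval latest_plus=relative_time(latest_t, "+1d")
| eval latest_plus_str=strftime(latest_plus, "%m/%d/%Y %H:%M:%S")
```

If this returns 5/9/2024 05:00:00 as expected, the problem is likely in how $timedrop$ is populated rather than in relative_time() itself.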
@Ryan.Paredez I have tried the installation again on a new VM and followed all the steps as mentioned. I can see the Custom Metric/Linux Monitor folder under the VM on the AppDynamics dashboard, but under mountedNFSStatus I am not getting any data. Sharing a snapshot below. I am also getting a NullPointerException in the machine agent logs:

vm==> [Agent-Monitor-Scheduler-1] 13 May 2024 05:56:29,943 INFO MetricWriteHelperFactory-Linux Monitor - The instance of MetricWriteHelperFactory is com.appdynamics.extensions.MetricWriteHelper@e8e0a3b
vm==> [Monitor-Task-Thread3] 13 May 2024 05:56:30,446 ERROR NFSMountMetricsTask-Linux Monitor - Exception occurred collecting NFS I/O metrics
java.lang.NullPointerException: null
    at com.appdynamics.extensions.linux.NFSMountMetricsTask.getMountIOStats(NFSMountMetricsTask.java:173) [?:?]
    at com.appdynamics.extensions.linux.NFSMountMetricsTask.run(NFSMountMetricsTask.java:66) [?:?]
    at com.appdynamics.extensions.executorservice.MonitorThreadPoolExecutor$TaskRunnable.run(MonitorThreadPoolExecutor.java:113) [?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
vm==> [Monitor-Task-Thread1] 13 May 2024 05:56:30,763 INFO LinuxMonitorTask-Linux Monitor - Completed the Linux Monitoring task
Big thanks to you, @ITWhisperer! The solution works flawlessly, and I'm particularly impressed by the elegant use of the foreach command. It perfectly aligns with our exact requirements. Thanks for the guidance and assistance.
Attached is sample data from two tables. For each of SNC1 and SNC2, there will be data every 15 minutes, and the values can differ. The idea is to build a time series for each SNC over any of the values, with filtering based mainly on SNC and one or more of the values at the same time.
Report data would be as below:

par1  time             b       e     f     g     l         m         n         r      s
SNC1  12/5/2024 16:30  299367  -7.7  -7.9  -7.7  1.00E-37  1.00E-37  1.80E-07  13.93  12.91
SNC1  12/5/2024 16:45  299364...
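One way to get a per-SNC time series for a chosen value is timechart with a BY clause. A sketch, assuming the column names from the sample above (par1, b, etc.) are the extracted field names and the data arrives in 15-minute intervals:

```spl
index=my_index par1=SNC1 OR par1=SNC2
| timechart span=15m latest(b) AS b BY par1
```

Additional value filters (e.g. b>0 AND e<-5) can be added before the timechart to restrict which rows contribute.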
What @richgalloway said, but whenever you reference a JSON field containing dots on the right-hand side of an eval you MUST wrap the field name in single quotes, i.e. the first suggestion should be

eval Error=case(isnotnull('attr.error'), 'attr.error', isnotnull('attr.error.errmsg'), 'attr.error.errmsg')

but for your case the coalesce() option would make sense - note the use of single quotes there too - always on the right-hand side of the eval. This applies not just to JSON field names, but to any field name that contains non-simple characters or starts with a number.
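A run-anywhere illustration of the quoting rule (synthetic data; in eval, double quotes mean a string literal and single quotes mean a field name):

```spl
| makeresults
| eval 'attr.error'="connection refused"
| eval Error=coalesce('attr.error', 'attr.error.errmsg')
| table Error
```

Without the single quotes, "attr.error" on the right-hand side would be parsed as the field attr minus the field error, not as one field name.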
See this: https://docs.splunk.com/Documentation/ES/7.3.1/Admin/Formatassetoridentitylist

So your search will be:

index=my_asset_source ...
| eval priority="high"
| table nt_host priority ...
| outputlookup my_asset_definition.csv

You just need to fill in the gaps so you can collect the fields mentioned in the document. Set the priority based on your business rules for how you want to assign priority.
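A slightly fleshed-out sketch of the same idea; the extra columns (ip, mac, dns, category) are assumptions based on the standard ES asset-list format, and the index and field names should be adjusted to match your actual source:

```spl
index=my_asset_source
| eval priority="high"
| table ip mac nt_host dns priority category
| outputlookup my_asset_definition.csv
```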
Hi @uagraw01, Browsers will not trust your self-signed certificates without additional configuration. In most cases, you'll want to use a certificate signed by a mutually trusted certificate authority. This is not an endorsement of Qualys, but https://www.ssllabs.com/ provides general information on SSL/TLS that you may find beneficial.
Yes, it is possible to upgrade forwarders first. As you've noted, that is contrary to the published procedures and may not work. Also, Splunk version 7.3.1 is well outdated so there is no guidance about its compatibility with other versions. This will be an interesting experiment. Please let us know how it goes.
Hello, there is an old system where we want to upgrade Splunk to the newest version. First we want to upgrade the forwarders on 3 test servers. The current version of the Splunk universal forwarder is 7.0.3.0 and we want to raise it to 9.2.1. Would that version work for the time being with Splunk Enterprise 7.3.1? I know it would be better to upgrade Enterprise first, as best practice is to use indexers with versions that are the same as or higher than the forwarder versions (but there is hesitation to upgrade the indexers first, as they are also used for production data). But would it be possible to do the forwarders first?

Edit: The upgrade was successful.
Check for the existence of a field with the isnotnull() function.

eval Error=case(isnotnull(attr.error), 'attr.error', isnotnull(attr.error.errmsg), 'attr.error.errmsg')

or use the coalesce() function, which does the tests for you and selects the first listed field that is not null.

eval Error=coalesce('attr.error','attr.error.errmsg')
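A run-anywhere sketch of the coalesce() behavior across events where only one of the two fields exists (synthetic data built with makeresults):

```spl
| makeresults count=2
| streamstats count AS n
| eval 'attr.error'=if(n=1, "first error", null())
| eval 'attr.error.errmsg'=if(n=2, "second error", null())
| eval Error=coalesce('attr.error', 'attr.error.errmsg')
| table n Error
```

Row 1 should pick up attr.error and row 2 should fall through to attr.error.errmsg.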
If attr.error exists then Error should be attr.error. If attr.error does not exist and attr.error.errmsg exists then Error should be attr.error.errmsg. I have tried the code below; only one case works and the other case fails. Please advise.

eval Error=case(NOT attr.error =="*", 'attr.error', NOT attr.error.errmsg =="*", 'attr.error.errmsg')
Assuming all your "dynamic" fields follow the naming convention, try this:

| foreach rw*
    [| eval maxelements=if(isnull(maxelements),mvcount('<<FIELD>>'),if(maxelements<mvcount('<<FIELD>>'),mvcount('<...
Hi @sanjai, If your original values come from separate events, then a simple table may be all you need: | table ds_file_path rwws01 rwmini01 However, the x-axis is a bit wordy. Can you provide a mock sample of your original data and a drawing of your target visualization?
Hello Splunk Community, I'm encountering challenges while converting multivalue fields to single-value fields for effective visualization in a line chart. Here's the situation.

Current output (rwws01 and rwmini01 are multivalue fields):

ds_file_path                                 rwws01 rwmini01
\\swmfs\orca_db_january_2024\topo\raster.ds  0.56 0.98 0.99 5.99 9.04 8.05 5.09 5.66 7.99 8.99

In this output table, the fields rwws01 and rwmini01 are dynamic, so hardcoding them isn't feasible. The current format is causing challenges in visualizing the data as a line chart. The required output is:

ds_file_path                                 rwws01       rwmini01
\\swmfs\orca_db_january_2024\topo\raster.ds  0.98         5.99
\\swmfs\orca_db_january_2024\topo\raster.ds  0.99         3.56
\\swmfs\orca_db_january_2024\topo\raster.ds  0.56         4.78
\\swmfs\orca_db_january_2024\topo\raster.ds  NULL (or 0)  9.08
\\swmfs\orca_db_january_2024\topo\raster.ds  NULL (or 0)  2.98
\\swmfs\orca_db_january_2024\topo\raster.ds  NULL (or 0)  5.88

I tried different commands and functions, but nothing gave me the desired output. I'm seeking suggestions on how to achieve this single-value field format, or alternative functions and commands to achieve this output and create a line chart effectively. Your insights and guidance would be greatly appreciated! Thank you.
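For reference, when the field names are known in advance, the usual way to flatten paired multivalue fields into one row per value is mvzip plus mvexpand. A sketch assuming only the two fields from the sample (the dynamic-field case needs foreach, as in the accepted answer):

```spl
| eval zipped=mvzip(rwws01, rwmini01, "|")
| mvexpand zipped
| eval rwws01=mvindex(split(zipped, "|"), 0)
| eval rwmini01=mvindex(split(zipped, "|"), 1)
| table ds_file_path rwws01 rwmini01
```

Note that mvzip pairs values positionally and stops at the shorter field, so uneven multivalue lengths (the NULL rows above) need extra padding logic.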