I completely agree with what you've stated below @gcusello @isoutamo @ITWhisperer @PickleRick , and I'm on the same page. However, as you know, compliance principles operate on the premise that whether an issue is present or not, it's best to assume it is and address it accordingly. In my situation, we mainly deal with network-related data where the likelihood of finding PII is very low. Nonetheless, as a security requirement, we want to establish controls that ensure any sensitive information, if present, is masked.
Not really. The main point here is that the input to this query would be an array instead of a simple value, e.g. current input format:
| eval Tag = "Tag1"
desired input format:
| eval Tags = ["Tag3", "Tag4"]
For this:
| eval Tags = ["Tag3", "Tag4"]
| spath
| foreach *Tags{}
    [| eval tags=mvappend(tags, if(lower('<<FIELD>>') = lower(Tag), "<<FIELD>>", null()))]
| dedup tags
| stats values(tags)
I would like to get:
Info.Apps.MessageQueue.ReportTags{}
Info.Apps.ReportingServices.ReportTags{}
Info.Apps.MessageQueue.UserTags{}
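For context, SPL's eval has no array literal, so one hedged workaround (a sketch, not the only option: it assumes the tag list is known up front and inlines it into a case-insensitive regex instead of reading it from a Tags field) could be:
| spath
| foreach *Tags{}
    [| eval tags=mvappend(tags, if(match('<<FIELD>>', "(?i)^(Tag3|Tag4)$"), "<<FIELD>>", null()))]
| dedup tags
| stats values(tags)
Here match() replaces the single-value equality test, so any of the listed tag names is accepted while the matching field names are still collected with mvappend().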
Hello Team, we want to monitor our AWS OpenSearch resources with AppDynamics and we have configured the AWS OpenSearch CloudWatch extension, but unfortunately it is throwing the below error:
"ERROR AmazonElasticsearchMonitor - Unfortunately an issue has occurred: java.lang.NullPointerException: null
at com.appdynamics.extensions.aws.elasticsearch.AmazonElasticsearchMonitor.createMetricsProcessor(AmazonElasticsearchMonitor.java:77) ~[?:?]
at com.appdynamics.extensions.aws.elasticsearch.AmazonElasticsearchMonitor.getNamespaceMetricsCollector(AmazonElasticsearchMonitor.java:45) ~[?:?]
at com.appdynamics.extensions.aws.elasticsearch.AmazonElasticsearchMonitor.getNamespaceMetricsCollector(AmazonElasticsearchMonitor.java:36) ~[?:?]
at com.appdynamics.extensions.aws.SingleNamespaceCloudwatchMonitor.getStatsForUpload(SingleNamespaceCloudwatchMonitor.java:31) ~[?:?]
at com.appdynamics.extensions.aws.AWSCloudwatchMonitor.doRun(AWSCloudwatchMonitor.java:102) [?:?]
at com.appdynamics.extensions.AMonitorJob.run(AMonitorJob.java:50) [?:?]
at com.appdynamics.extensions.ABaseMonitor.executeMonitor(ABaseMonitor.java:199) [?:?]
at com.appdynamics.extensions.ABaseMonitor.execute(ABaseMonitor.java:187) [?:?]
at com.singularity.ee.agent.systemagent.components.monitormanager.managed.MonitorTaskRunner.runTask(MonitorTaskRunner.java:149) [machineagent.jar:Machine Agent v24.9.1.4416 GA compatible with 4.4.1.0 Build Date 2024-10-03 14:53:45]
at com.singularity.ee.agent.systemagent.components.monitormanager.managed.PeriodicTaskRunner.runTask(PeriodicTaskRunner.java:86) [machineagent.jar:Machine Agent v24.9.1.4416 GA compatible with 4.4.1.0 Build Date 2024-10-03 14:53:45]
at com.singularity.ee.agent.systemagent.components.monitormanager.managed.PeriodicTaskRunner.run(PeriodicTaskRunner.java:47) [machineagent.jar:Machine Agent v24.9.1.4416 GA compatible with 4.4.1.0 Build Date 2024-10-03 14:53:45]
at com.singularity.ee.util.javaspecific.scheduler.AgentScheduledExecutorServiceImpl$SafeRunnable.run(AgentScheduledExecutorServiceImpl.java:122) [agent-24.10.0-891.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask$Sync.innerRunAndReset(ADFutureTask.java:335) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask.runAndReset(ADFutureTask.java:152) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.access$101(ADScheduledThreadPoolExecutor.java:128) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.runPeriodic(ADScheduledThreadPoolExecutor.java:215) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.run(ADScheduledThreadPoolExecutor.java:253) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.runTask(ADThreadPoolExecutor.java:694) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.run(ADThreadPoolExecutor.java:726) [agent-24.10.0-891.jar:?]
at java.lang.Thread.run(Thread.java:829) [?:?]"
Can someone help here? We have used the below GitHub code base for the same: https://github.com/Appdynamics/aws-elasticsearch-monitoring-extension
That was actually my first idea as well, but both our DNS servers are reachable, tcpdump shows no activity on port 53 during those 19s, and Splunk is even able to do reverse lookups on the sending devices' IPs.
Yes, it is indeed. I thought of that, but I assume that the creators of SC4S probably wanted the timestamp to have the fraction seconds appended to it, if there is a metadata variable holding them. In that case, the decimal point and the fraction seconds need to follow the timestamp without any whitespace. That is the reason the whitespace is missing where you pointed it out. If there isn't a variable holding the fraction seconds, however, like in my case, no trailing space is added to the timestamp, and the host key-value pair immediately follows it without a whitespace. Any idea how I could add a whitespace conditionally?
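For what it's worth, syslog-ng (which SC4S builds on) has an $(if) template function, so a conditional space might look roughly like this sketch; ${MY_FRAC} is a hypothetical macro standing in for whatever variable would hold the fraction seconds:
template t_ts_host {
    # emit ".<frac> " when fraction seconds exist, otherwise just a single space
    template("${ISODATE}$(if ('${MY_FRAC}' ne '') '.${MY_FRAC} ' ' ')host=${HOST}\n");
};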
Don't know about this particular case, but consistent delays on connection init are often caused by DNS issues (either DNS timeouts resolving the host to connect to, or delays on the receiving side due to attempts to resolve the source host's IP back to a hostname).
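A quick hedged check (the addresses below are placeholders) is to time a forward and a reverse lookup against each DNS server directly and see whether either one stalls:
time dig splunk-receiver.example.com @192.0.2.53
time dig -x 192.0.2.10 @192.0.2.53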
Hi @danielbb , as @isoutamo and @kiran_panchavat also said, 8089 is a management port that cannot be used via the GUI; in addition, connections to 8089 all use https, not http. Ciao. Giuseppe
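As a hedged illustration (host and credentials are placeholders), 8089 is meant for REST/CLI access such as:
curl -k -u admin:changeme https://splunk-host:8089/services/server/info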
Hi @michael_vi , sorry but your question isn't so clear: what do you mean by "app class"? Are you speaking of an add-on for input data, or something else? Splunk doesn't reindex the same data twice even if you change the data filename. The only way to reindex already-indexed data is if you used crcSalt = <SOURCE> in your inputs.conf stanzas and you changed the data filename. Final note: all changes to a conf file (not made via the GUI) require a Splunk restart on the machine. Ciao. Giuseppe
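For reference, a minimal inputs.conf sketch of the crcSalt setting mentioned above (the monitored path and metadata are hypothetical):
[monitor:///var/log/myapp/*.log]
# <SOURCE> mixes the full file path into the file's CRC, so a renamed file is treated as new and reindexed
crcSalt = <SOURCE>
index = main
sourcetype = myapp:log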
Wait. What do you mean by "expand to a cluster"? And what are you trying to achieve? I understand that initially you have an all-in-one installation. What architecture are you aiming at? Cluster (unless explicitly referred to as an SH cluster) typically means a cluster of indexers with a Cluster Manager. For that you need at least a single separate SH. So for a clustered installation you need at least three nodes - one SH, one CM and at least one indexer. The first thing to do, if you indeed have an AIO setup, would be to add an external SH and turn your existing server into a pure indexer. After you have done that, you might think of converting the indexer to a cluster node.
Hi @desmando , you have to follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.4.0/Indexer/Migratenon-clusteredindexerstoaclusteredenvironment
In a few words:
install the same Splunk version on the new Indexers and the Cluster Manager,
configure the CM as the Cluster Manager node,
configure the IDXs as peer nodes,
modify the IDX configurations for a cluster,
deploy the configurations of the old IDX to both peers using the CM,
configure the SH to access the cluster.
In the CM, you should see both IDXs and all the indexes replicated. Remember that only new data are replicated between the IDXs; old data aren't. To replicate old data as well, you need Splunk Professional Services or a Certified Core Consultant. Ciao. Giuseppe
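As a hedged sketch of those steps (hostnames, the secret and the factors are placeholders; this assumes Splunk 9.x CLI syntax):
# On the Cluster Manager
splunk edit cluster-config -mode manager -replication_factor 2 -search_factor 2 -secret mycluster_key
splunk restart
# On each indexer (peer node)
splunk edit cluster-config -mode peer -manager_uri https://cm.example.com:8089 -replication_port 9887 -secret mycluster_key
splunk restart
# On the search head
splunk edit cluster-config -mode searchhead -manager_uri https://cm.example.com:8089 -secret mycluster_key
splunk restart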
Hi all, a general question that I couldn't find an answer to... If I change a certain app class from one to another and restart splunkd, will there be any effect on indexing? I mean, will it re-index the same data, or a portion of it, twice? Or, since it's the same app and same source, maybe there is no need to restart splunkd? Thanks
Hi @arusoft , the best approach should be to move all these knowledge objects into one or more custom apps, package them and upload them to your Splunk Cloud instance. The only issue you could find is if you used some special items such as scripts, custom commands, etc., because they aren't accepted in Splunk Cloud. In addition, you have to manually add version="1.1" to the first row of all dashboards. For this reason, the process could be:
group all your knowledge objects in one or more custom apps,
install them in a standalone 9.x Splunk instance,
make all the changes to the dashboards,
use the Upgrade Readiness App ( https://splunkbase.splunk.com/app/5483 ) on this system to highlight eventual anomalies,
package your custom apps,
upload them one by one to Splunk Cloud,
check the upgrade reports to identify additional errors to solve.
Ciao. Giuseppe
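For reference, the version attribute goes on the root element of each Simple XML dashboard; a minimal sketch (label and panel content are placeholders):
<dashboard version="1.1">
  <label>My Dashboard</label>
  <row>
    <panel>
      <title>Example panel</title>
      <!-- panel content goes here -->
    </panel>
  </row>
</dashboard>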