@gcusello @PickleRick I have changed my approach. I used a script that copies the files from the network folder into a local folder, and changed the monitor stanza in inputs.conf, but this also did not work. Below is what I changed in inputs.conf:
[monitor://C:\Windows\Temp\outgoing\*.xml]
disabled = false
index = new_demo_scada
host = VIDI
sourcetype = new_demo_scada
The props.conf and transforms.conf settings remain the same.
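Not mentioned in the original post, but one common reason copied files are silently skipped is that Splunk deduplicates files by the CRC of their first 256 bytes, and many small XML reports can share an identical header. A hedged sketch of the same stanza with crcSalt added (all other values are taken from the stanza above):

```
[monitor://C:\Windows\Temp\outgoing\*.xml]
disabled = false
index = new_demo_scada
host = VIDI
sourcetype = new_demo_scada
# Mix the file path into the CRC so files with identical leading bytes
# are still treated as distinct sources
crcSalt = <SOURCE>
```

crcSalt = <SOURCE> is a documented inputs.conf setting; whether it applies here depends on whether the XML files really share a common prefix.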
Hi @ynag This is a legacy app, please check this documentation: https://docs.splunk.com/Documentation/CPVMwareDash/latest/CP/About It appears that you don't have sufficient permissions. To resolve this, assign both the 'splunk_vmware_admin' and 'splunk_vmware_user' roles to the admin user. You can find detailed instructions in the documentation below: https://docs.splunk.com/Documentation/VMW/4.0.4/Installation/ConfigureuserrolesfortheSplunkAppforVMware
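If you prefer the CLI over Settings > Roles in Splunk Web, a sketch of the same assignment with splunk edit user (note that the -role flags replace the user's entire role list, so every role the user should keep must be repeated; admin:changeme is a placeholder for real credentials):

```
$SPLUNK_HOME/bin/splunk edit user admin -role admin -role splunk_vmware_admin -role splunk_vmware_user -auth admin:changeme
```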
To inject trace context fields into logs, enable log correlation by setting the SIGNALFX_LOGS_INJECTION environment variable to true before running your instrumented application. Reference: https://github.com/signalfx/signalfx-dotnet-tracing/blob/main/docs/correlating-traces-with-logs.md After enabling the SIGNALFX_LOGS_INJECTION environment variable, I was able to see the traceId values in Splunk.
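As a concrete sketch (MyApp.dll is a placeholder for your instrumented .NET application):

```
# Enable trace/log correlation, then start the app in the same shell
export SIGNALFX_LOGS_INJECTION=true
dotnet MyApp.dll
```

On Windows you would use set SIGNALFX_LOGS_INJECTION=true (cmd) or $env:SIGNALFX_LOGS_INJECTION="true" (PowerShell) instead.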
Hi @uagraw01, probably something changed! Analyze the input from scratch, starting from the timestamp, since I don't see where it comes from. Ciao. Giuseppe
Looking to create a report showing the uptime of all hosts in a specific index that ingest data via a UF. I would like to see, over the past 30 days, the percentage of uptime per host in index=abc. I am trying to create a metrics report showing how frequently a host is logging to Splunk.
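One way to sketch this in SPL, assuming "uptime" means "the host logged at least one event in a given hour" (index=abc comes from the question; the 1h granularity is an assumption, adjust the span to taste):

```
index=abc earliest=-30d@d latest=@d
| bin _time span=1h
| stats dc(_time) as active_hours by host
| eval uptime_pct = round(active_hours / (30 * 24) * 100, 2)
| sort - uptime_pct
```

A tstats variant over the same index would run considerably faster over 30 days of data, at the cost of only being able to group by indexed fields.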
Anyway, regardless of the reason, if it used to work and then stopped, it would be prudent to troubleshoot for the cause instead of blindly trying to add a setting here and there. Check your splunkd.log on the forwarder for errors, and check the output of splunk list inputstatus and splunk list monitor.
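A sketch of those checks on the forwarder (paths assume a default Linux Universal Forwarder install under /opt/splunkforwarder; adjust for your platform):

```
# Input status and monitored-file list as the forwarder sees them
/opt/splunkforwarder/bin/splunk list inputstatus
/opt/splunkforwarder/bin/splunk list monitor

# Recent file-monitoring errors in splunkd.log
grep -iE "TailReader|TailingProcessor|WatchedFile" \
    /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -n 20
```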
OK. So this is not about the searching itself but rather about the base/post-process search functionality within the dashboard. It's a completely different topic. A base search should be a reporting search and should not return an overly huge number of results. Otherwise you might get unpredictable results (and there was definitely something about specifying a list of fields, but I can't recall the details). Anyway, it's usually not good practice to return a raw list of events from the base search and then post-process it with stats as the "refining" search. The approach should be to generate all (possibly fairly detailed) stats in the base search and aggregate them the way you want in the post-process search.
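A minimal sketch of that pattern (index, sourcetype, and field names are made up for illustration). The base search does the detailed aggregation once:

```
index=web sourcetype=access_combined
| stats count by host, status
```

Each panel's post-process search then only re-aggregates that small, already-transformed result set, e.g.:

```
| stats sum(count) as requests by status
```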
I have a number of log-rotated files for mail.log in the /var/log folder on a Unix system. The /var/log/mail.log file gets ingested just fine, so I know permissions aren't an issue. However, I'd like to also ingest the older data that was log-rotated; for the purpose of ingesting, those files were untarred again, so I have mail.log.1 to mail.log.4. I have tried numerous stanzas and regexes in the whitelist, but none lead to the older data getting ingested. The one I currently have in place is:

[monitor:///var/log/]
index = postfix
sourcetype = postfix_syslog
whitelist = (mail\.log$|mail\.log\.\d+)

Thanks for any suggestions in advance.
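Not an answer from the thread, but for a one-time backfill of rotated files it can be simpler to bypass the monitor input entirely and do a oneshot upload per file (run on the machine holding the files; index and sourcetype are taken from the stanza above):

```
for f in /var/log/mail.log.1 /var/log/mail.log.2 /var/log/mail.log.3 /var/log/mail.log.4; do
    $SPLUNK_HOME/bin/splunk add oneshot "$f" -index postfix -sourcetype postfix_syslog
done
```

Also worth knowing: if a rotated file begins with the same bytes Splunk already indexed from mail.log, the monitor input may skip it as a duplicate regardless of the whitelist, since file identity is based on a CRC of the leading bytes.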
Hi @uagraw01, If there are too many files in that folder, you can try adding the "ignoreOlderThan" setting in the monitor stanza:

[monitor://\\WALVAU-SCADA-1\d$\CM\alarmreports\outgoing*]
disabled = false
index = scada
host = WALVAU-SCADA-1
sourcetype = cm_scada_xml
ignoreOlderThan = 24h
Actually, there is _raw after transaction. It consists of the merged _raw values of the events making up the transaction. But the question is whether there are any events matching this condition. The first thing I'd check would be to search without the "NOT" condition and see if it matches any events at all.
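A sketch of that diagnostic step (the index, sourcetype, and transaction field are hypothetical; the message filter is modeled on the one discussed in this thread):

```
index=yourindex sourcetype=yoursourcetype
| transaction correlation_id
| search message="*Failed Processing Concur*"
```

If this matches nothing, the filter itself (field name, casing, wildcards) is the first thing to question, since its NOT variant would then trivially match every transaction.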
Hello @PickleRick Thank you for your feedback, I will try to provide the maximum of details here:
- We have a dashboard using simple searches in single value panels; in every single value panel we have this kind of query: index=x sourcetype=z filter1=a filter2=b | stats dc(value) as nb_value
- To optimize the queries we had to use a base search containing the first part of the query. When called in a single value panel it did not provide any result, so we defined the fields we wanted to extract with the fields command and applied the stats dc right after. We noticed that we had fewer results (also when switched to verbose mode); when we replaced fields with the table command we got the exact number.
PS: we have no errors, we just noticed the big difference in results; we are on Splunk Cloud. Thank you
Hi @karthi2809, Since there is no _raw data after the transaction command, you cannot do free-text searches. You should search using a specific field, like: | search NOT message="*Failed Processing Concur*"