Thanks for the reply. Unfortunately that is not an option, as we need to keep the logs from all the servers: they all live on giant RAM disks, and when a system is shut down everything goes away except on this one host. I was hoping we could somehow massage the data on the log aggregator (heavy forwarder, maybe?) and push it to Splunk with the correct hostname.
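For reference, a minimal sketch of the kind of host rewrite a heavy forwarder can apply at parse time (the sourcetype, the stanza name, and the assumption that the original hostname appears somewhere in each event are mine, not from this thread):

props.conf on the heavy forwarder:
[aggregated_syslog]
TRANSFORMS-sethost = set_host_from_event

transforms.conf:
[set_host_from_event]
# pull the originating hostname out of the event text (pattern is an assumption)
REGEX = host=(\S+)
DEST_KEY = MetaData:Host
FORMAT = host::$1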
How do I display a one-row table in a pie chart? Thank you for your help.
index=test ---- Score calculation -----
| table Score1, Score2, Score3, Score4
Score1 Score2 Score3 Score4
70     50     60     90
My expected pie chart:
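One common approach (a sketch, assuming the base search above): transpose flips the single row into one row per score, giving the pie chart a category column and a value column to work with.

index=test ---- Score calculation -----
| table Score1, Score2, Score3, Score4
| transpose
| rename column AS Score, "row 1" AS Value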
Hi @Beshoy.Shaher,
There is an existing Community post that seems to discuss the same subject. Can you please check it out and see if it helps?
https://community.appdynamics.com/t5/Business-iQ-Analytics/Starting-Events-Service-cluster/m-p/44127
Hello, thank you, that indeed solved my issue. I also noticed that some screenshots in your documentation are not up to date; it would be worth updating them for other users. Thanks again for your response!
@richgalloway Hi, instead of using SEDCMD-rm, can we use something like this?
REGEX = cs4=(\w+) cs4Label=([\s]+)
FORMAT = $2::$1
e.g.: cs4= will smith cs4Label=suser display name
Thanks.
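As a side note, a hedged sketch of a pattern that would match the multi-word sample above, since \w+ stops at the first space and [\s]+ matches only whitespace (the stanza name and the lazy quantifier are assumptions, not from the original post):

[cef_cs4_rename]
# lazily capture the value up to the label key, then take the rest as the label
REGEX = cs4=\s*(.+?)\s+cs4Label=(.+)
FORMAT = $2::$1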
Hi folks, I have a KV store containing the following values: hostname and IP address. This KV store is updated every hour, using a saved search, to ensure that the hostname and IP address always match. The reason is that the IP-to-hostname relationship in the environment changes very often (DHCP). I was thinking about an automatic lookup so that, when logs containing an IP address are ingested, they are enriched with the hostname corresponding to the time of ingestion and not the one found during a later search. Unfortunately, ingest-time lookups are not available in Splunk Cloud Platform, and that functionality only works with CSV file lookups anyway. The solution with an intermediate forwarder is also not suitable for me. Any advice?
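For reference, a minimal sketch of the search-time automatic lookup under discussion (the collection, field, and sourcetype names are assumptions); note that it resolves at search time, which is exactly the limitation described above:

transforms.conf:
[host_by_ip]
external_type = kvstore
collection = host_ip_map
fields_list = ip, hostname

props.conf:
[my_sourcetype]
LOOKUP-host_by_ip = host_by_ip ip AS src_ip OUTPUT hostname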
I have missing data from a certain date and time range. How would I re-ingest the data into Splunk from a UF?
Below is the inputs.conf:
[monitor:///app/java/servers/app/log/app.log.2023-11-12]
index = app_logs
ignoreOlderThan = 10d
disabled = false
sourcetype = javalogs
It is missing data from Nov 11 00:05 till Nov 12 13:00, so how would I re-ingest the data only for that certain date/time period? It is just one log file for the day, although we do have some events from it, so how would I re-ingest only the missing data for that time period? Please let me know the config.
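One common approach (a sketch, not from this thread): the UF will not re-read a file it has already checkpointed, so re-ingestion usually means feeding the missing slice back in manually, for example with a oneshot. The awk filter assumes each log line starts with a sortable timestamp; re-ingesting events that were already indexed will create duplicates.

# carve out the missing window (timestamp format is an assumption)
awk '$0 >= "2023-11-11 00:05" && $0 <= "2023-11-12 13:00"' \
    /app/java/servers/app/log/app.log.2023-11-12 > /tmp/app.log.missing

# feed the slice in once, bypassing the monitor checkpoint
$SPLUNK_HOME/bin/splunk add oneshot /tmp/app.log.missing -index app_logs -sourcetype javalogs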
It sounds to me like when data is aggregated on the one server the original host information is lost. Would it be possible for each RHEL7 host to forward its logs directly to Splunk? That would preserve the host information.
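A minimal sketch of what direct forwarding from each host could look like (the group name, indexer hostname, and port are assumptions):

outputs.conf on each RHEL7 host:
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = splunk-indexer.example.com:9997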
Hello, I have the below Splunk search and I want to put the results into a line graph so I can compare all of the disk instances (e.g. C, D, F) over a period of time. The search that I am using is:

index=windows_perfmon eventtype="perfmon_windows" Host="XXXX" object="LogicalDisk" counter="% Disk Write Time" instance="*" AND NOT instance=_Total AND NOT instance=Hard*
| stats latest(Value) as Value by _time, instance
| eval Value=round(Value, 2)

Any advice? I would like to create this as a line graph visualisation with the instances on different lines so you can do trend analysis on the Disk Write Time. The results I am getting are:

_time                instance  Value
2023-11-15 15:28:02  C:        2.83
2023-11-15 15:28:02  D:        0.01
2023-11-15 15:33:02  C:        4.10
2023-11-15 15:33:02            0.01
2023-11-15 15:38:02  C:        2.59
2023-11-15 15:38:02            0.01
2023-11-15 15:43:02  C:        1.98
2023-11-15 15:43:02            0.01
2023-11-15 15:48:02  C:        2.81
2023-11-15 15:48:02            0.01
2023-11-15 15:53:02  C:        2.51
2023-11-15 15:53:02            0.01
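A sketch of one way to get one line per disk (same base search as above): let timechart split the series by instance, so the chart gets a column per disk instance.

index=windows_perfmon eventtype="perfmon_windows" Host="XXXX" object="LogicalDisk" counter="% Disk Write Time" instance="*" NOT instance=_Total NOT instance=Hard*
| eval Value=round(Value, 2)
| timechart span=5m latest(Value) by instance

Viewed as a line chart, each instance (C:, D:, ...) becomes its own series.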
The highest value for frozenTimePeriodInSecs is 4294967295 (about 136 years). There are a few size limit settings; which ones to use depends on whether you use volumes or SmartStore. Check out maxTotalDataSizeMB, maxGlobalRawDataSizeMB, maxGlobalDataSizeMB, homePath.maxDataSizeMB, and coldPath.maxDataSizeMB, all of which have the same maximum value (4294967295).
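A minimal indexes.conf sketch combining time-based and size-based retention (the index name and values are assumptions):

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# roll buckets to frozen after roughly one year
frozenTimePeriodInSecs = 31536000
# cap the index at roughly 500 GB
maxTotalDataSizeMB = 512000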
Hi @doadams85, after you installed the Splunk Universal Forwarder on the target host, did you:
configure the Indexer to receive logs from forwarders (by default on port 9997)?
configure your UF to send logs to that Indexer (outputs.conf)?
install the Splunk TA for Linux?
enable the input stanzas?
Ciao. Giuseppe
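A sketch of the first two items in the checklist above (the group name, hostname, and port are assumptions):

On the Indexer, inputs.conf (or Settings > Forwarding and receiving):
[splunktcp://9997]
disabled = 0

On the UF, outputs.conf:
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = my-indexer.example.com:9997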
Hi there, I have multiple panels added in a dashboard and I would like to reduce the font size of the entire dashboard's contents. The dashboard is being created with classic dashboards (Simple XML), not Dashboard Studio. Is there a way to achieve this? Ty!
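One commonly used approach in Simple XML (a sketch; the CSS selector and size are assumptions and may need adjusting for your Splunk version) is to embed CSS in a hidden HTML panel:

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        .dashboard-body {
          font-size: 11px;
        }
      </style>
    </html>
  </panel>
</row>

The depends token never gets set, so the row stays hidden while the CSS still applies to the whole dashboard.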
KV_MODE=auto means Splunk will automatically extract fields when it finds data in key=value format. KV_MODE=none disables that automatic search-time extraction for the host, source, or sourcetype the props.conf stanza applies to. This can be useful if you extract the fields yourself. The Add-on Builder must be used on a local, non-clustered instance. It should work on Windows, but I've not done so myself. Apps built on a Windows platform will not pass Splunk Cloud app vetting because Windows does not set the file permissions correctly.
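A minimal props.conf sketch of the two settings (the sourcetype names are assumptions):

[vendor:kv_logs]
# let Splunk auto-extract key=value pairs at search time
KV_MODE = auto

[vendor:custom_logs]
# rely on explicitly defined extractions instead
KV_MODE = none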
Click on the > symbol on each line to see more information about the failure. There will then be a button you can click to get specific guidance on how to remediate the problem. Often, that involves installing a newer version of an app. Other times, you simply need to add version="1.1" to the first line of each dashboard's source. Some fixes require Python code changes, and Splunk will offer suggestions for them.
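For the dashboard case, the change is just the version attribute on the root element, for example (label and search are placeholders):

<dashboard version="1.1">
  <label>My Dashboard</label>
  <row>
    <panel>
      <table>
        <search><query>index=_internal | head 10</query></search>
      </table>
    </panel>
  </row>
</dashboard>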