Hello, I am currently building correlation searches in ES and I am running into a "searches delayed" issue. Some of my searches run every hour, most run every 2 hours, and some run every 3 or 12 hours. My time range looks like: Earliest Time: -2h, Latest Time: now, cron schedule: 1 */2 * * *. For each new search I add +1 to the minute field of the cron schedule, up to 59, and then start over, so the next search's schedule would be 2 */2 * * *, and so on. Is there a more efficient way I should be scheduling searches? Thank you.
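For reference, the staggering described above would look roughly like this in savedsearches.conf (the search names are made up for illustration):
[Correlation Search - Example A]
cron_schedule = 1 */2 * * *

[Correlation Search - Example B]
cron_schedule = 2 */2 * * *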
Hi Team, two episodes are being generated for the same host and the same description but with different severities: one is high and the other is info. When I checked the correlation search, we have configured only high, and I checked the NEAP (notable event aggregation policy) as well. Under the episode information, the severity we configured matches the first event. Can someone please guide me on how to avoid the info episode, and where to find the configuration path for the info severity? Regards, Nagalakshmi
Thanks for your help! I am still confused about how the indexer cluster should be managed: if I want to create any KOs on the search head side, should I push these KOs to the indexers as well?
You have only defined a single source and destination; you need to compare multiples. The color*/edge*/weight* fields are meant to let you independently review the spread across the comparison values associated with the source/destination groups. Look at the sample visualization images on the Splunkbase app page for prime examples. https://splunkbase.splunk.com/app/4611
Start with disk space - most likely Splunk is so busy trying to manage buckets and ingestion with no additional space that many activities are suffering.
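One quick way to check from search is the partitions-space REST endpoint (a sketch only; the endpoint and field names are as I recall them, so verify on your version):
| rest /services/server/status/partitions-space splunk_server=local
| eval pct_free = round(free / capacity * 100, 1)
| table mount_point capacity free pct_free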
At one point there was a Splunk-on-Splunk template for ITSI which worked wonders in a previous environment I monitored. I supplemented the existing template with inbound syslog system monitoring. However, I didn't do anything to monitor the router, FW, and LB, since the network was quite large and any HA activities would require additional details; it would have been too large a task for the return on value. Monitoring these items separately would have a lot of value, and if the port labels are informative you can make up for the full integration map. IMO.
I downloaded the tutorial data and want to upload it, but I keep getting an error message. Also, my system health is showing red, and when I click it, it shows too many issues that have to be resolved. Where do I begin resolving my issue? Thanks
Hi @BRFZ, did you configure this server to send logs to the Indexers? Did you open the firewall routes between this server and the Indexers on port 9997? Make these checks. Ciao. Giuseppe
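As a sketch (the host names are placeholders), outputs.conf on that server would look something like this, and splunk list forward-server shows whether the connections are active:
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.local:9997, idx2.example.local:9997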
It is correct that the knowledge bundle is replicated to the search peers, but for parsing during indexing (e.g. timestamp extraction) only the configuration from $SPLUNK_HOME/etc/peer-apps is used as a source. That is the reason why you must deploy the TA on the indexers if there is no Heavy Forwarder in between. The knowledge bundle is only used during searching. https://docs.splunk.com/Documentation/Splunk/9.3.2/Indexer/Howindexingworks
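For example, index-time settings like these in a TA's props.conf only take effect where parsing happens (the indexers, or a heavy forwarder in front of them), not via the search-time knowledge bundle (the sourcetype and values here are illustrative, not taken from the Windows add-on):
[my:custom:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19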
Hello, I have a server configured with three roles: Deployment Server, Monitoring Console, and License Master. However, I am not receiving the internal and audit logs from this server the way I do from the Search Heads or Indexers. If you have any solutions to this problem, I would greatly appreciate your help.
You can do this more easily with the following config: <format type="color" field="categoryId">
<colorPalette type="map">{"ACCESSORIES":#6DB7C6,"ARCADE":#F7BC38,"STRATEGY":#AFEEEE}</colorPalette>
</format> https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/TableFormatsXML#Table_format_source_code_example
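Applied to the ERROR/INFO/WARNING case in the question below, it could look like this (the field name log_level and the hex colors are assumptions to adjust):
<format type="color" field="log_level">
<colorPalette type="map">{"ERROR":#DC4E41,"INFO":#006D9C,"WARNING":#F8BE34}</colorPalette>
</format>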
I need to color a dashboard with 3 types of logs in Red, Blue, and Yellow for ERROR, INFO, and WARNING. So far I managed to integrate the CSS code below in the panel (it colors all rows Blue):
<html>
<style type="text/css">
.table-colored .table-row, td{
color:blue;
}
</style>
</html>
Does anyone know how to add conditions based on cell values? E.g. if Cell = INFO -> Blue, if Cell = ERROR -> Red, if Cell = WARNING -> Yellow.
Hello Splunkers, I hope you are all doing well. I have tried to deploy the Windows TA add-on across my environment [Search Head Cluster + Deployer] [3 Indexer Peers + Indexer Cluster Master] [Deployment Server + Universal Forwarder]. I used the Deployment Server to push the inputs.conf to the designated universal forwarder, which is located on the domain controller server, and enabled the needed inputs. I then removed wmi.conf and inputs.conf from the Windows TA add-on, copied the rest to the local folder, and used the deployer to push the modified Windows TA to the search heads. As per the screenshot below from the official doc, the indexer is conditional: why should I push the add-on to the indexers even if there are index-time field extractions? As far as I know, the search head cluster replicates the whole knowledge bundle to the indexers, so all the KOs will be replicated and there is no need to push them. Am I correct? Splunk Add-on for Microsoft Windows Thanks in advance!!
Please remove the parameter master_uri = self and try again. If you get the same error, please execute splunk btool server list license --debug and share the output.
Hi @richgalloway, thanks for your input. Yes, I only gave the configuration for one index because I mainly rely on the default conf written above for all my indexes on the disk; plus, this specific index was the only one saturated, so that is probably the issue here (please correct me if I'm wrong in this statement). For the volumes, I have one in my conf, but I'm not sure how it works and how it's used (I didn't write this conf file myself); I'll try to look into this subject. [volume:MyVolume]
path = $SPLUNK_DB
Thanks!
Hi!
Thank you for your response.
I made the change below to my query, including the "ERROR" key using regex, and it works properly:
index="idx_xxxx"
| rex field=_raw "\"ERROR\":\"(?<ERROR>[^\"]+)\""
....
You've shown the configuration for a single index, but no doubt there are other indexes on the same disk. Those other indexes also consume disk space and help lead to a minFreeSpace situation. To better manage that, I recommend using volumes. Create a volume (in indexes.conf) that is about the size of the disk (or the amount of it you want to use) and make the indexes part of that volume (using volume:foo references). That will ensure the indexer considers the sizes of all indexes when deciding when to roll warm buckets.
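A minimal sketch (the volume size and index name are placeholders, and note that thawedPath cannot use a volume reference):
[volume:MyVolume]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 450000

[my_index]
homePath = volume:MyVolume/my_index/db
coldPath = volume:MyVolume/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb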