Per the DOCS, here: Install the Splunk Add-on for Windows - Splunk Documentation and for metrics here: https://docs.splunk.com/Documentation/AddOns/released/Windows/Configuration#Collect_perfmon_data_and_wmi:uptime_data_in_metric_index You should ensure you have a metrics index defined, and install the add-on accordingly at every layer to ensure you're getting the data you need.
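For reference, a metrics index is just an index with datatype = metric in indexes.conf; a minimal sketch (the name win_metrics and the paths are placeholders, not from the docs):
[win_metrics]
datatype = metric
homePath = $SPLUNK_DB/win_metrics/db
coldPath = $SPLUNK_DB/win_metrics/colddb
thawedPath = $SPLUNK_DB/win_metrics/thaweddb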
How can I match the IPs from a CSV file with the CIDR ranges in another CSV? If no CIDR matches, I want to return "NoMatch", and if an IP matches a CIDR, return the matching CIDR.
I tried the approach below, but I keep getting "No Match" for all entries, even though I have proper CIDR ranges:
"| inputlookup IP_add.csv
| rename "IP Address" as ip
| appendcols
[| inputlookup cidr.csv]
| foreach cidr
[ eval match=if(cidrmatch('<<FIELD>>', ip), cidr, "No Match")]"
Note: I can't use join, as there is no IP field in the cidr csv.
Any help would be greatly appreciated. Thank you.
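For what it's worth, the appendcols approach can't work here: appendcols pairs row N of one file with row N of the other, so each IP is only ever tested against a single CIDR. The usual join-free way is a CIDR-based lookup: define a lookup on cidr.csv with match_type = CIDR(cidr) in transforms.conf (or via Settings > Lookups > Lookup definitions > Advanced options) and let the lookup command do the subnet matching. A sketch, assuming the CIDR column in cidr.csv is named cidr and the definition is saved as cidr_lookup:
[cidr_lookup]
filename = cidr.csv
match_type = CIDR(cidr)
Then:
| inputlookup IP_add.csv
| rename "IP Address" as ip
| lookup cidr_lookup cidr AS ip OUTPUT cidr AS matched_cidr
| eval matched_cidr=coalesce(matched_cidr, "NoMatch")
Unmatched IPs get no output field from the lookup, so the coalesce supplies "NoMatch".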
Yes - it's only perfmon data we're not getting. Splunk internals and event log events are both OK. AFAIK (and as intended) these are not being collected as metrics. I'd been through the article you referenced, and have now been back and checked my workings. We've not installed the Windows add-on to every layer yet - I've just used a bit of the inputs.conf from it initially to get the data to look at, and will then go back to all the clever bits once the basics are working.
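For comparison, a minimal perfmon stanza of the sort the add-on ships looks like this (the object, counter, interval, and index values here are illustrative, not the add-on's defaults):
[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = *
interval = 60
index = windows_perfmon
disabled = 0
If a stanza like this is in place on the UF and nothing arrives, splunk btool inputs list perfmon --debug will show whether it is actually being picked up and from which app.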
We had this issue with some of our devices for syslog data; the workaround is to use a syslog server. If you are comfortable with Linux, then stand up a server with rsyslog, do the appropriate configs, and then put a UF on the host and have it monitor the log folder/files, etc.
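A minimal sketch of that setup, assuming rsyslog 8.x, a listener on udp/514, and /var/log/remote as the destination directory (all placeholder choices):
# /etc/rsyslog.d/10-remote.conf - accept syslog on udp/514 and file it by sending host
module(load="imudp")
input(type="imudp" port="514")
template(name="PerHost" type="string" string="/var/log/remote/%HOSTNAME%/syslog.log")
*.* action(type="omfile" dynaFile="PerHost")
And on the UF, a monitor stanza pointing at that tree, with host_segment picking the hostname out of the fourth path segment:
[monitor:///var/log/remote/*/syslog.log]
sourcetype = syslog
host_segment = 4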
Tcpdump shows syslog coming from everything except our hosts. I have tried udp/514 and tcp/1514; neither shows up. Everything else does show up. When we had this on a Windows server there was no issue - we didn't have to do anything special, it was coming over on udp/514. What is the recommended method for ingesting syslog? We are a small shop and have never had issues with this method in the past. Also, what distro would you recommend? This is a new install, so it wouldn't be a stretch to rebuild it.
We apply a range of GPO settings to get us close to a CIS Level One hardening. This does usually include the Windows Firewall, but it's set to off where it needs to be and it's off here.
Thanks for the thoughts - I've re-checked both: the inputs are all good and showing in the btool output, and all other logs and events are getting through fine.
If you have one or a few columns in your table, you could use the substr function in your search to set a maximum number of characters.
E.g. to truncate the field "col" to 100 characters (note that substr in SPL is 1-based):
<your search>
| eval col = substr(col,1,100)
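If there are many columns, a foreach over all fields saves repeating the eval; a sketch along the same lines (this assumes field names the left-hand side of the eval can handle, e.g. no spaces):
<your search>
| foreach * [ eval <<FIELD>> = substr('<<FIELD>>', 1, 100) ]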
Hi @Guido.Bachmann,
Thanks for asking your question on the Community. It's been a few days with no reply - have you found a way to do this yourself that you can share? If you are still looking for help, you can reach out to your AppD rep, or contact AppD Support (www.appdynamics.com/support).
I am not familiar with Splunk on Docker, so I don't have any experience that will be useful here. Some refs you may find useful:
Architecture | docker-splunk
Navigation | docker-splunk
Forwarding data into indexer - Splunk Community (similar question)
Events will be timestamped, so perhaps subsequent searches are finding events in the same time frame which weren't present when the summary index was created. Have a look at the _indextime field for the events to see if there is a spread which would account for this. Also, have a look to see if your events have been duplicated in your subsequent searches. Another thing you could check is whether the data in your summary index is correct (for the events which were present at the time they were added to the summary index).
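A quick way to see that spread, with <your base search> standing in for the search that feeds the summary index:
<your base search>
| eval lag_minutes = round((_indextime - _time) / 60, 1)
| stats min(lag_minutes) max(lag_minutes) avg(lag_minutes)
Events that arrive well after their timestamp will show a large lag here, and any that arrive after the two-hour summary window closes will be missing from the summary index.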
Can someone explain to me why, when I run my base search, it returns far more events in the same time frame than the summary index search (which is based on the base search)? My main concern is whether I am having gaps in log events or not. The summary index report runs every two hours, looking back two hours.
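One way to see where the counts diverge (the summary index name and source below are placeholders) is to bucket both result sets on the same span and chart them side by side; this only works if the base search is purely streaming, with no transforming commands:
| multisearch
    [ search <your base search> | eval series="base" ]
    [ search index=summary source="your_summary_search_name" | eval series="summary" ]
| timechart span=2h count BY series
Buckets where base far exceeds summary point at late-arriving events, or at the report's two-hour lookback missing data.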
Hi, two queries about tables in Splunk Dashboard Studio. Query 1: how do I wrap the text (column values) in a table? Query 2: how do I expand and collapse the row size in a table?
Hey, just heard about CVE-2024-5535 on the splunkforwarder agent 9.0.9 with OpenSSL 1.0.2zj. Is this a real one? Do we need to upgrade the agent now? Thanks in advance.
Here is my role to allow a user to run a Splunk health check.
[role_check_health]
cumulativeRTSrchJobsQuota = 0
cumulativeSrchJobsQuota = 0
# capabilities needed to read health and monitoring information
dispatch_rest_to_indexers = enabled
edit_dist_peer = enabled
edit_health = enabled
edit_health_subset = enabled
edit_monitor = enabled
# inherit baseline capabilities from the stock power and user roles
importRoles = power;user
license_tab = enabled
list_deployment_client = enabled
list_deployment_server = enabled
list_dist_peer = enabled
list_forwarders = enabled
list_health = enabled
list_health_subset = enabled
list_httpauths = enabled
list_indexer_cluster = enabled
list_indexerdiscovery = enabled
list_search_head_clustering = enabled
list_search_scheduler = enabled
list_settings = enabled
# searches are limited to the internal (_*) indexes
srchIndexesAllowed = _*
srchMaxTime = 0
srchTimeEarliest = -1
srchTimeWin = -1
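A quick way to verify the role works as intended is to log in as a user holding it and query the splunkd health report endpoint, e.g. (assuming the standard health endpoint, which the list_health capability should cover):
| rest /services/server/health/splunkd splunk_server=local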
@JohnEGones, I have created a Splunk indexer and a Splunk UF using docker-compose files. Both are running on the same host. We are able to forward logs if we configure file monitoring in inputs.conf, but when I tried reading data from a TCP input, the data is not reaching the indexer. Could you please share some debugging steps to troubleshoot this issue?
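Without seeing the configs, a few generic things worth checking (port 5514 and the paths below are examples, not taken from your setup). First, the input stanza itself:
[tcp://5514]
sourcetype = my_tcp_data
index = main
Then, inside the UF container:
netstat -tlnp | grep 5514
/opt/splunkforwarder/bin/splunk list forward-server
tail -50 /opt/splunkforwarder/var/log/splunk/splunkd.log
The first command confirms splunkd is actually listening on the port, the second confirms the outbound connection to the indexer is active, and splunkd.log will show errors for either. Also check that docker-compose publishes the TCP port - a sender outside the container can only reach it if it is mapped. A quick test from the docker host, if nc is installed: echo "tcp test" | nc localhost 5514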