All Posts

This may be a relevant source for additional troubleshooting: Solved: What's the best way to get Windows Perfmon data in... - Splunk Community
Tcpdump shows syslog coming from everything except our hosts. I have tried udp/514 and tcp/1514; neither shows up, but everything else does. When we had this on a Windows server there was no issue — we didn't have to do anything special, and it came over on udp/514. What is the recommended method for ingesting syslog? We are a small shop and have never had issues with this method in the past. Also, what distro would you recommend? This is a new install, so it wouldn't be a stretch to rebuild it.
@mooree You write: "All other logs and events are getting through fine." Are these other (non-metric) logs from that same 2022 server?
We apply a range of GPO settings to get us close to a CIS Level One hardening. This does usually include the Windows Firewall, but it's set to off where it needs to be and it's off here. 
Thanks for the thoughts. I've re-checked both: the inputs all look good and show up in the btool output, and all other logs and events are getting through fine.
If you have one or a few columns in your table, you could use the substr function in your search to set a maximum number of characters. E.g., to truncate the field "col" to 100 characters (note that substr is 1-indexed, so the start position should be 1, not 0): <your search> | eval col = substr(col,1,100)
We have a table with large text values. Those values need to be truncated to a single line.
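Building on the substr suggestion above, one way to also force multi-line values onto a single line is to strip the newlines first. A sketch, assuming the column is named "col" and 100 characters is the desired width (both are placeholders):

```spl
<your search>
| eval col = replace(col, "[\r\n]+", " ")
| eval col = substr(col, 1, 100)
```

The replace() call collapses any run of line breaks into a single space, and substr() then caps the length so the table renders one compact line per row.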
Hi @Guido.Bachmann, Thanks for asking your question on the Community. It's been a few days with no reply; have you found a way to do this yourself that you can share? If you are still looking for help, you can reach out to your AppD Rep, or contact AppD Support (www.appdynamics.com/support).
I am not familiar with Splunk on Docker, so I don't have any experience that will be useful here. Some references you may find useful:
Architecture | docker-splunk
Navigation | docker-splunk
Forwarding data into indexer - Splunk Community (a similar question)
Events will be timestamped, so perhaps subsequent searches are finding events in the same time frame which weren't present when the summary index was created. Have a look at the _indextime field for the events to see if there is a spread which would account for this. Also, check whether your events have been duplicated in your subsequent searches. Another thing you could check is whether the data in your summary index is correct (for the events which were present at the time they were added to the summary index).
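One way to measure that indexing-lag spread is a search along these lines (the index name is a placeholder):

```spl
index=your_base_index earliest=-24h
| eval index_lag_seconds = _indextime - _time
| stats min(index_lag_seconds) max(index_lag_seconds) avg(index_lag_seconds)
```

A large maximum lag would mean some events only landed after the two-hour summary search had already run over that window, which would explain the gap.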
I don't see the attachment. Have you looked at index=_internal for log_level IN (WARN, ERROR)?
Can someone explain why, when I run my base search, it returns far more events in the same time frame than the summary index search (which is based on the base search)? My main concern is whether I have gaps in log events. The summary index report runs every two hours, looking back two hours.
Hi,
Query 1: How do I wrap text (column values) in a table in Splunk Dashboard Studio?
Query 2: How do I expand and collapse row size in a table in Splunk Dashboard Studio?
Hey, I just heard about CVE-2024-5535 on the splunkforwarder agent 9.0.9 for OpenSSL 1.0.2zj. Is this a real one? Do we need to upgrade the agent now? Thanks in advance.
Here is my role to allow a user to run a Splunk health check.

[role_check_health]
cumulativeRTSrchJobsQuota = 0
cumulativeSrchJobsQuota = 0
dispatch_rest_to_indexers = enabled
edit_dist_peer = enabled
edit_health = enabled
edit_health_subset = enabled
edit_monitor = enabled
importRoles = power;user
license_tab = enabled
list_deployment_client = enabled
list_deployment_server = enabled
list_dist_peer = enabled
list_forwarders = enabled
list_health = enabled
list_health_subset = enabled
list_httpauths = enabled
list_indexer_cluster = enabled
list_indexerdiscovery = enabled
list_search_head_clustering = enabled
list_search_scheduler = enabled
list_settings = enabled
srchIndexesAllowed = _*
srchMaxTime = 0
srchTimeEarliest = -1
srchTimeWin = -1
@JohnEGones, I have created a Splunk Indexer and a Splunk UF using docker-compose files. Both are running on the same host. We are able to forward the logs if we configure file monitoring in inputs.conf, but when I tried reading the log data from a TCP input, the data is not reaching the Indexer. Could you please share some debugging steps to troubleshoot this issue?
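For comparison, a minimal TCP network input on the UF side might look like the sketch below; the port, index, and sourcetype are placeholders, not values from the original post. After editing, restart the forwarder and verify the port is actually listening inside the container (e.g. with netstat or ss), and that the docker-compose file publishes that port.

```
# inputs.conf on the Universal Forwarder (placeholder values)
[tcp://9999]
index = main
sourcetype = my_tcp_data
connection_host = ip
```

A common pitfall with Docker is that the TCP port is open inside the container but not exposed in the compose file, so the sender's connection never reaches the UF at all.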
Hi, I can't show my logs because of privacy issues, but for example:
In INDEX1: src=1.1.1.1 query=dns.com direction=
And in INDEX2: domain=dns.com category=education website
Could you clarify what the documentation meant when it said "secondary" and "warm standby primary", if a warm standby only has two servers? I am curious.  Just wanted to consider my options for backups and present them.  
@user487596 - I have the same result for the client info; it feels like this is how it is designed, though having the actual IP would help. For the username, in my case I do see the actual username in some events; in others it is "-".
It's simple, really, since we know what precedes and follows the desired field.  Just put the known text into the regular expression and add a named capture group between them.  The pattern for the capture group can be either a non-greedy match of anything (.*?) or match anything that is not what follows the field ([^\|]+).
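As an illustration of that approach, here is a self-contained sketch using makeresults; the sample event and field names are invented for the example:

```spl
| makeresults
| eval _raw = "start=foo|user=alice|end=bar"
| rex field=_raw "\|user=(?<user>[^\|]+)\|"
| table user
```

The named capture group (?<user>[^\|]+) grabs everything between the known surrounding text and the next pipe delimiter; the non-greedy variant (?<user>.*?) would extract the same value here.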