Ahh, right you are. Still, there might be an issue with indexed extractions. In any case, the timestamp doesn't appear to be "evenly" offset, so it looks more like the current timestamp than the one from the event. There might also be an issue with the prefix itself: we don't see the raw data, so there could be any number of whitespace characters there. (Escaping the quotes is not needed, but it shouldn't be harmful in this case.)
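A hedged sketch of how the prefix could be made tolerant of unknown whitespace in props.conf (the sourcetype name, prefix literal, and time format below are placeholders, since we haven't seen the raw data):

```
[your:sourcetype]
# "timestamp=" is a stand-in for whatever literal precedes the time value.
# \s* allows any amount of whitespace between the prefix and the timestamp.
TIME_PREFIX = timestamp=\s*
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
```

If the prefix regex never matches, Splunk falls back to other timestamp-recognition logic, which can produce exactly the "current time instead of event time" symptom described above.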
We have a requirement to forward different data to multiple Splunk instances. In this case, security data is forwarded to EntServer2, while app data is forwarded to EntServer1. What is best practice regarding Universal Forwarders: set up two Universal Forwarders on the same app server, or set up and configure a single UF to forward to both Ent servers?
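For reference, a single UF can route different inputs to different output groups, so two forwarders on one host are usually unnecessary. A minimal sketch, assuming hypothetical group names, port 9997, and an example monitored path:

```
# outputs.conf on the single UF
[tcpout]
defaultGroup = app_group

[tcpout:app_group]
server = EntServer1:9997

[tcpout:security_group]
server = EntServer2:9997
```

```
# inputs.conf: send only this input to the security group
[monitor:///var/log/secure]
sourcetype = linux_secure
_TCP_ROUTING = security_group
```

Inputs without an explicit _TCP_ROUTING fall through to defaultGroup.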
If you just want to filter on "*172.21.255.8*", why do you have all that extra stuff in the regex? Try this simpler version: REGEX = ,172\.21\.255\.8:\d+,
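A quick sanity check of that simpler regex in Python (the sample events below are made up; the comma delimiters mirror the pattern in the answer):

```python
import re

# Same pattern as the suggested transforms.conf REGEX
pattern = re.compile(r",172\.21\.255\.8:\d+,")

matching = "2024-08-22T12:00:00,172.21.255.8:51423,10.0.0.5,ALLOW"
other_ip = "2024-08-22T12:00:00,172.21.255.80:51423,10.0.0.5,ALLOW"

print(bool(pattern.search(matching)))  # True
print(bool(pattern.search(other_ip)))  # False: the escaped dots and the
                                       # ":port" anchor keep .80 from matching
```

Note the literal dots are escaped and the `:` anchors the match, so similar addresses like 172.21.255.80 do not slip through.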
Data rolled to the frozen directory is showing up as inflight data, with a size of 0. There are few details out there about inflight-db directories or why they happen; basically Splunk says they are created when Splunk is writing from warm to cold, but not much more than that.

So say Splunk is writing around 100 GB worth of buckets: for example, you had 3 indexers with buckets that were 3 months old and you forced all the buckets from those 3 indexers to move to cold. As long as the indexers have write access to your storage, should there even be inflight dbs? Or is that too much to write at once, so Splunk writes some of the data, logs an error for the rest, and calls it a day? Is there a limit to how much can be written to cold at one time?

And if a write gets interrupted, why doesn't Splunk detect that and resume where it left off to complete the transfer? I know there are logs, but it seems to me it should be like watching a movie over the internet: I should be able to pause and then resume when I'm ready. Better yet, if 100 buckets start writing and some technical issue happens a quarter or halfway through, the write should either cancel completely and tell me in plain language, or pause and resume when the connection is back up.
We are planning to deploy a Splunk architecture with one search head and an indexer cluster, with 100 GB/day data ingestion. Is there any recommended documentation for OS partitions (paths with sizes), mount points, and RAID configuration for Linux servers?
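As an illustration only (not official Splunk sizing guidance; verify against Splunk's capacity planning documentation for your volumes and retention), a commonly seen starting layout separates the OS, the Splunk installation, and hot/cold bucket storage:

```
/                  OS root, kept small
/opt/splunk        Splunk binaries, internal logs, KV store
/splunkdata/hot    fast storage (SSD, RAID 10) for hot/warm buckets
/splunkdata/cold   larger, cheaper storage for cold buckets
```

The hot/warm path needs the highest IOPS; cold can sit on slower disks. Actual sizes depend on your retention policy and replication factor.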
I am new to regex. I want to extract just Catalog-Import from the query below. Can anyone help with how I can do this? [2024-08-22 12:55:56.439 GMT] ERROR CustomJobThread|1154761894|Catalog-Import|GetNavigationCatalogFromSFTP com.demandware.api.net.SFTPClient Sites-ks_jp_rt-Site JOB faadaf233c 09beff21183cec83f264904132 5766054387038857216 - SFTP connect operation failed
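One approach, sketched in Python and assuming the job name always sits between the numeric thread id and the next pipe (as in the sample line), is to capture the pipe-delimited token after the digits:

```python
import re

log_line = (
    "[2024-08-22 12:55:56.439 GMT] ERROR CustomJobThread|1154761894|"
    "Catalog-Import|GetNavigationCatalogFromSFTP "
    "com.demandware.api.net.SFTPClient - SFTP connect operation failed"
)

# \|\d+\|  matches the "|1154761894|" thread-id segment
# ([^|]+)  captures everything up to the next pipe
match = re.search(r"\|\d+\|([^|]+)\|", log_line)
print(match.group(1))  # Catalog-Import
```

In SPL the same idea would be `| rex "\|\d+\|(?<job_name>[^|]+)\|"` (the field name `job_name` is just an example).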
Hello everyone,
I'm trying to filter out some logs in the IA-WindowsSecurity Application.
The events should be filtered out at index time when:
- EventCode=4634 AND Security_ID="*$"
I created an app, deployed to the indexers, with the following props and transforms config:
props.conf
[WinEventLog]
TRANSFORMS-remove_computer_logoff = remove_logoff
transforms.conf
[remove_logoff]
REGEX =
DEST_KEY = queue
FORMAT = nullQueue
I made the following regexes for matching the event:
- EventCode=4634
- Security_ID=".*\$$"
I'm not sure how to correctly combine these two regexes into one. I did a lot of testing with different regexes (in PCRE format), but I wasn't able to make it work.
Can someone please help me?
Thanks in advance
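Since TRANSFORMS regexes run against the raw event, one way to combine the two conditions is a single pattern that requires both. A sketch, verified in Python: the sample event below is synthetic, and the exact field layout (spacing, "Security ID:" label) depends on your input format, so check the pattern against real raw data before deploying.

```python
import re

# Event must contain EventCode=4634 AND a Security ID ending in "$"
# (machine account). (?s) lets .* span newlines in a multiline raw event.
pattern = re.compile(r"(?s)EventCode=4634.*Security ID:\s*\S*\$")

event = (
    "08/22/2024 12:55:56 PM\n"
    "LogName=Security\n"
    "EventCode=4634\n"
    "...\n"
    "Security ID:  ACME\\WINHOST$\n"
)

print(bool(pattern.search(event)))  # True: would be routed to nullQueue
```

The same expression would then go into transforms.conf as REGEX = (?s)EventCode=4634.*Security ID:\s*\S*\$ under the [remove_logoff] stanza.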
Try switching the last two lines:
| addtotals col=t row=f labelfield=Index label="Overall Total"
| stats list(SourceType) as "Source-Type", list(GB) as GB by Index
index=_internal source=/opt/splunk/var/log/splunk/license_usage.log type=Usage
| stats sum(b) as bytes by st , idx
| eval GB=round(bytes/(1024*1024*1024),6)
| table st, idx, GB
| sort -GB
| eventstats sum(GB) as total
| eval Percentage=round((GB/total)*100,6)
| rename st as SourceType
| rename idx as Index
| stats list(SourceType) as "Source-Type", list(GB) as GB by Index
| addtotals col=t row=f labelfield=Index label="Overall Total"
Please find the below sample values:
Index        Source-Type                 GB
aws_vpcflow  aws:vpcflow                 10
             aws:cloudwatchlogs:vpcflow  20
windows      windows:fluentd             30
             windows                     40
             WinEventLog:Security        50
cloud        cloud_watch                 60
             aws_cloud                   70
It is not clear from this what you are expecting as your output. How do the failure_reason lines relate to the status lines? Could you please share some actual events (anonymised as appropriate), preferably in a code block?
If this is indeed your actual event and those are your actual props settings, Splunk will never find the timestamp, because you have a very low lookahead set: there is no timestamp within the first 20 characters of the event. Additionally, are you using indexed extractions?
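A hedged props.conf sketch of the fix (the sourcetype name, prefix, and time format below are placeholders to be adjusted to the real event):

```
[your:sourcetype]
# Widen the scan window past where the timestamp actually appears;
# a low explicit value like 20 stops the scan before Splunk reaches it.
MAX_TIMESTAMP_LOOKAHEAD = 150
TIME_PREFIX = \[
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
```

Note that MAX_TIMESTAMP_LOOKAHEAD is counted from where TIME_PREFIX matches (or from the start of the event if no prefix is set).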
Hi @altink, if you are the customer, you can designate reference people who are allowed to open cases. Usually these are the contractual references, but it's better to designate one contractual reference and two or three operational people. To do this, one of the already-active contractual references must open a case with Splunk Support naming the other people who should be able to open cases. Ciao. Giuseppe
Thank you @gcusello. We (Unionbank) are the customer. It seems it is a contractual matter. But I would like Splunk Support to show a message about this after logon, instead of simply not letting me select a field; it is confusing. Regards, Altin