Hi @KhalidAlharthi, Forwarding filter settings (forwardedindex.*) are only valid globally. See "Index filter settings" at https://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf. If you want to conditionally forward a set of indexes to every configured output, you can use this configuration. This is useful, for example, when an intermediate forwarder receives data from upstream forwarders that are outside your control, and you only need to forward a subset of events targeting specific indexes.

If you have a dedicated heavy forwarder receiving data on 1517/udp and you want to clone that data to a syslog destination, you can add the _SYSLOG_ROUTING setting to the input stanza:

# inputs.conf
[udp:1517]
index = tmao
connection_host = none
no_priority_stripping = true
no_appending_timestamp = true
sourcetype = tmao_sourcetype
_SYSLOG_ROUTING = send-to-remotesiem

# outputs.conf
[syslog:send-to-remotesiem]
server = remotesiem:514
# Set type = tcp or type = udp; TLS is not supported.
# If you use a hostname in the server setting, I recommend using a local
# DNS resolver cache, e.g. bind (named), systemd-resolved, etc.
type = udp
# Use NO_PRI to indicate _raw already includes the PRI header.
priority = NO_PRI
# syslogSourceType tells Splunk that tmao_sourcetype already has syslog
# headers. If you do not use the syslogSourceType setting, Splunk
# will prefix _raw with additional host and timestamp data.
syslogSourceType = sourcetype::tmao_sourcetype
# maxEventSize is the maximum payload for UDPv4. Use a value compatible
# with your receiver.
maxEventSize = 65507

Splunk does not differentiate between RFC 3164 and RFC 5424, and _raw should be maintained and forwarded in its native format. This is aided by the inputs.conf no_priority_stripping and no_appending_timestamp settings, which may conflict with your source or source type's expectations for parsing _raw at index and search time.

Regardless, Splunk will inject a malformed host value in _raw, e.g.:

In:  <0>Jun 9 11:15:00 host1 process[1234]: This is a syslog message.
Out: <0> splunk Jun 9 11:15:00 host1 process[1234]: This is a syslog message.

Your downstream receiver must handle the data appropriately. All of this can be mitigated using complex transforms to rewrite _raw into the format you need, but I recommend asking a separate question specific to that topic after you have syslog routing working.

Note that Splunk's syslog output can block indexQueue, which also feeds tcpout outputs, and prevent Splunk from indexing and forwarding data. The dropEventsOnQueueFull setting in the outputs.conf [syslog] stanza can help mitigate blocking at the expense of data loss. The syslog output queue size cannot be changed, and the only way to scale syslog output is to increase the parallelIngestionPipelines setting in the server.conf [general] stanza.

Splunk wasn't designed specifically for syslog routing. If you need more control over syslog itself, you may want to use a syslog service like rsyslog or syslog-ng to receive and route the data to both Splunk and another downstream system. There are numerous third-party products with similar capability.
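As a minimal sketch of the index-conditional routing mentioned above, a transform keyed on the index metadata can apply _SYSLOG_ROUTING only to events destined for a specific index (the stanza names and the "tmao" index are illustrative, not from a live config):

```
# props.conf (illustrative stanza name)
[source::udp:1517]
TRANSFORMS-route_by_index = route_tmao_index

# transforms.conf
# Apply the syslog routing key only to events destined for the "tmao" index.
[route_tmao_index]
SOURCE_KEY = _MetaData:Index
REGEX = ^tmao$
DEST_KEY = _SYSLOG_ROUTING
FORMAT = send-to-remotesiem
```

Events in other indexes never receive the _SYSLOG_ROUTING key and are not sent to the syslog output.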
Please note that the DR site did not exist when we implemented the multisite cluster, so we decided to insert the configuration below:

site_replication_factor = origin:2,total:2
available_sites = site1

With this, the cluster did not sync any data to the DR site, which did not exist at the beginning of the implementation. Now the DR site will be up, and we will install 3 new indexers in it. We will reconfigure the cluster manager with the configuration below to add one copy of the data to the DR indexers. So the question is: will all logs (20TB) be transferred to the DR site, or just real-time logs?

Before installing the DR indexers:

site_replication_factor = origin:2,total:2
available_sites = site1

After installing the DR indexers:

site_replication_factor = origin:2,total:3
available_sites = site1,site2
Hello Community, I have forwarded the data for Trend Micro to another third-party SIEM (QRadar) using a HF. This is the configuration I did:

# props.conf
[source::udp:1411]
TRANSFORMS-send_tmao_route = send_tmao_to_remote_siem

# transforms.conf
[send_tmao_to_remote_siem]
REGEX = .
SOURCE_KEY = _MetaData:Index
DEST_KEY = _SYSLOG_ROUTING
FORMAT = remote_siem

# outputs.conf
[syslog:remote_siem]
server = remotesiem:1234
sendCookedData = false

I have verified with tcpdump that packets are going from the HF to the third-party system, but the events do not appear in the SIEM. Why is that? Any help...?
The cluster will do what is necessary to meet the replication and search factors.  That may mean replicating 20TB of data to the other site.
This is part of one table:

hostname | monitor | ip  | other fields...
aaa      | v       | ... |
aaa      | x       | ... |
bbb      | v       | ... |

How can I change the value 'x' to 'v' in the second row (when there are two different values for the same hostname, save it as 'v')? I need to keep the ip because it can be different, and the other fields can also be different. The main problem is that I join to this table by hostname, and the join relies on the value of monitor, and sometimes it gets 'x' when the real value is 'v'. Maybe I can make the join only use rows where monitor is 'v'? I hope you understand.
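One way to normalize monitor per hostname is an eventstats pass that collects all monitor values for each hostname and overwrites the field with 'v' wherever any row for that hostname has 'v'. A minimal SPL sketch, assuming the table is a lookup named your_table.csv (a placeholder) and the field names shown above:

```
| inputlookup your_table.csv
| eventstats values(monitor) AS all_monitors BY hostname
| eval monitor=if(isnotnull(mvfind(all_monitors, "^v$")), "v", monitor)
| fields - all_monitors
```

The ip and other fields are untouched, so rows stay distinct; only the monitor value is made consistent before the join.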
Hi @richgalloway, thank you for your reply. You said that the cluster will immediately create additional copies of all hot, warm, and cold buckets. Do you mean that the additional copy will be copied to the DR site? If I have data in the main site, say 8TB in hot/warm and 12TB in cold, will the cluster replicate all 8TB and 12TB of logs to the DR indexers?
Once the RF is increased, the cluster immediately will create additional copies of all hot, warm, and cold buckets.
The coldPath setting must be defined and the location must exist. It's not possible (and not advised to try) to have a different configuration for the "DR" indexers.

To avoid using the cold path, create a script that deletes buckets and define it as the warmToColdScript for the index(es). You also could assign the coldPath to a volume and make that volume only large enough for a single bucket, so cold buckets are frozen almost immediately.
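A minimal sketch of the warmToColdScript approach (the index name and script path are assumptions; check the indexes.conf spec for the exact arguments Splunk passes to the script before relying on this):

```
# indexes.conf
[your_index]
homePath   = $SPLUNK_DB/your_index/db
coldPath   = $SPLUNK_DB/your_index/colddb
thawedPath = $SPLUNK_DB/your_index/thaweddb
# Instead of Splunk moving the bucket to coldPath, run this script,
# which removes the bucket so it never lands on the cold mount.
warmToColdScript = /opt/splunk/bin/scripts/delete_bucket.sh
```

where delete_bucket.sh would simply remove the bucket directory it is given (e.g. rm -rf on the bucket path argument). Note this permanently discards the data that would have rolled to cold, so retention must be acceptable.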
Hi Team, we are using a modular input to ingest logs into Splunk. We have a checkpoint file, but we see duplicate logs ingested into Splunk. How can we eliminate the duplicates? The application from which the logs are ingested is Tyk Analytics.
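While the checkpoint logic is being debugged, duplicates can at least be suppressed at search time. A minimal SPL sketch (the index and sourcetype names are placeholders, not from the original post):

```
index=your_index sourcetype=tyk:analytics
| dedup _raw
```

This hides duplicates in search results but does not remove them from the index; fixing the modular input's checkpointing is the real cure.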
Hello all, we have a multisite cluster manager, and we do not want a cold mount on the DR indexers. Is that possible? If an indexer hits hot/warm retention and the cold path is not found, will it delete the data?
We have been running our indexer cluster as a multisite cluster with 3 indexers in our main site for the past year, with the below configuration:

site_replication_factor = origin:2,total:2
site_search_factor = origin:1,total:1

Now we have decided to establish a disaster recovery site with an additional 3 indexers. The expected configuration for the new DR site will be as follows:

site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2

I would like to ask how replication will work once the DR indexers are configured. Will the replication process sync all logs in the hot, warm, and cold buckets, or will it start with real-time hot logs only?
Thanks @gcusello. Is it possible to define it like what you did, i.e. [TMAO_sourcetype]? And if yes, that is the sourcetype of the data source, right?
Hi @KhalidAlharthi, in props.conf you have to use only the sourcetype of the logs to send to syslog. If there is more than one, put more stanzas in props.conf.

# props.conf
[TMAO_sourcetype]
TRANSFORMS-send_foo_to_remote_siem = send_foo_to_remote_siem

# transforms.conf
[send_foo_to_remote_siem]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = remote_siem

# outputs.conf
[tcpout:remote_siem]
server = remotesiem:1234
sendCookedData = false

As I said, check the exact sourcetype name: I recently solved an issue similar to yours, where the error was the exact sourcetype name. Ciao. Giuseppe
Hi @rmo23, as @yuanliu also said, you should share more details about your infrastructure. Anyway, in ITSI there's an asset inventory that should be complete (otherwise you have a much bigger issue!). So you could use the lookup containing these assets (I don't remember its name) and run a search like the following:

| tstats count WHERE index=* BY host
| append [ | inputlookup your_asset_lookup | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

Ciao. Giuseppe
By doing this you are sending all the events to the remote SIEM. I need to send just TMAO Trend Micro. So what is the best approach to do this using syslog...?
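One way to scope the routing to only the Trend Micro events over syslog is to key the transform on the TMAO sourcetype and use _SYSLOG_ROUTING instead of _TCP_ROUTING. A sketch, assuming the sourcetype and stanza names (verify the exact sourcetype in your indexed data first):

```
# props.conf -- only events with this sourcetype are considered
[tmao_sourcetype]
TRANSFORMS-send_tmao_syslog = send_tmao_syslog

# transforms.conf -- attach the syslog routing key to every matching event
[send_tmao_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = remote_siem

# outputs.conf
[syslog:remote_siem]
server = remotesiem:1234
```

Because the props.conf stanza is the sourcetype itself, events of any other sourcetype never receive the routing key and are not cloned to the syslog output.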
Hi @shimada-k , good for you, see next time! let us know if we can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @irisk , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @KhalidAlharthi, does your solution run? I found an error: the transform is missing from props.conf. I'm not sure that you can put the TRANSFORMS in the default stanza, and I don't like using a regex on the index field, so I'd use a different approach:

# props.conf
[your_sourcetype]
TRANSFORMS-send_foo_to_remote_siem = send_foo_to_remote_siem

# transforms.conf
[send_foo_to_remote_siem]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = remote_siem

# outputs.conf
[tcpout:remote_siem]
server = remotesiem:1234
sendCookedData = false

Then pay attention to the sourcetype: you must be sure that you are using, in props.conf, the original sourcetype and not one transformed by the add-on. Ciao. Giuseppe
I used the Splunk web interface, went to Reports > Edit Acceleration for the specific report, clicked Save, and it says "This search cannot be accelerated". Please find the screenshot in the other reply.
Splunk says "This search cannot be accelerated" when I go to enable acceleration for the report and hit Save.
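For context, Splunk only accelerates reports whose search uses a transforming command (stats, chart, timechart, top, rare) with only streaming commands before it. A minimal example of a search that typically qualifies (the index, sourcetype, and field names here are placeholders):

```
index=web sourcetype=access_combined
| stats count BY status
```

Searches that end in raw events, or that use non-streaming commands such as sort or dedup before the transforming command, produce the "cannot be accelerated" message.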