All Posts

Hi @hazem, yes, there is a precedence order for configurations at index time, but for custom apps it is determined by the lexicographical (ASCII) order of the app names. It should still work either way, because you simply have a duplicated configuration that isn't required. Ciao. Giuseppe
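If you want to verify on the forwarder which copy actually wins, a quick check with btool (assuming a standard $SPLUNK_HOME install) is:

# Run on the forwarder; --debug prints the file that supplies each winning setting
$SPLUNK_HOME/bin/splunk btool outputs list --debug

The left-hand column of the output shows the app and file each line comes from, so you can see directly whether all_UF_outputs or all_splk_outputs is being applied.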
https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Manageacceleratedsearchsummaries#Restrictions_on_report_acceleration Since the search itself qualifies for acceleration, most probably your user role either lacks the capabilities to enable acceleration or lacks write permissions for the report.
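As a sketch, the two relevant capabilities can be granted through a custom role in authorize.conf; the role name below is just an example, and your admin may prefer to add the capabilities to an existing role instead:

# authorize.conf (illustrative sketch; "can_accelerate" is a made-up role name)
[role_can_accelerate]
importRoles = power
schedule_search = enabled
accelerate_search = enabled

The report itself also needs to grant your role write access under its Permissions settings.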
Yeah, I picked your solution. Could you please check your DM?
Did you mean that the configuration in (all_UF_outputs) will override (all_splk_outputs) because the capital letter (U) has higher precedence than the lowercase (s)?
Hi @Tzur, let me understand: do you want to take the last value of the "monitor" field, or is there a rule? If the last value, you could try:

<your_search>
| stats last(monitor) AS monitor values(ip) AS ip values(other_fields) AS other_fields BY hostname

If there's a rule (e.g. if ip=1.2.3.4), you can try:

<your_search>
| stats values(eval(if(ip="1.2.3.4","v","x"))) AS monitor values(ip) AS ip values(other_fields) AS other_fields BY hostname

Ciao. Giuseppe
Hi @hazem, let me understand: do your two apps address the same indexers or different ones? If the same, why? Anyway, it isn't correct, because the configuration in the first app overrides the one in the second. Could you share your outputs.conf? Ciao. Giuseppe
Hi @KhalidAlharthi, yes (I saw your other question!). Let me know if I can help you more, or, please, accept one answer for the other people in the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @Gil, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @rmo23, first check whether there's a way (I don't know ITSI very deeply) to enable email sending as an action. If not, extract the search from this dashboard and create a custom alert. Ciao. Giuseppe
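If you go the custom-alert route, a minimal sketch of the saved search could look like the following; the stanza name, schedule, and email address are placeholders, and the search itself is whatever you extract from the dashboard:

# savedsearches.conf (illustrative sketch only)
[Inactive servers alert]
search = <paste the search extracted from the Entity Overview dashboard here>
enableSched = 1
cron_schedule = */15 * * * *
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0
action.email = 1
action.email.to = ops-team@example.com
action.email.subject = Inactive servers detected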
Hi @KhalidAlharthi, let me understand: your forwarder is sending syslog to the third party but not to Splunk, is that correct? Do you have a defaultGroup in outputs.conf? If yes, try removing it. Ciao. Giuseppe
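For context, this is roughly what defaultGroup looks like in outputs.conf; when it is set, events without an explicit _TCP_ROUTING override are sent only to the listed tcpout group(s), which can mask routing problems. Group and server names below are placeholders:

# outputs.conf (sketch; group and server names are placeholders)
[tcpout]
defaultGroup = primary_indexers   # try commenting this out while testing routing

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997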
Hello all, I have two apps deployed on a Splunk forwarder agent, each with an outputs.conf file: the first one (all_UF_outputs) sends logs to the indexers' IPs, and the other (all_splk_outputs) sends logs to the indexers by hostname. How can I confirm which one has the highest precedence?
The site replication factor applies to *all* buckets (except thawed) so the cluster will create a third copy of all data, not just data that arrives after the change is made.
Hi, indeed, thanks to ITSI I can get data on the metrics and the status of my servers (active or inactive), I can predict the status of my infrastructure, etc. I just want to receive email alerts only when my servers are inactive. I only see this status when I'm in ‘Entity Overview’; is it possible to configure an email alert on it?
@tscroggins Thank you for your reply and help. I managed to forward the logs to a Linux server just to test the functionality, and it's working fine; I received the packets correctly in raw format. Is there a possibility to route the data to another system with Splunk's parsing applied? I think this should be done from the Splunk indexers.
Hi @KhalidAlharthi, If QRadar is receiving but not processing the data, you should probably contact IBM support. If IBM indicates the data is not in the correct format, the community can help with transforming the output on the Splunk side. (See my response to your previous question.)
Hi @KhalidAlharthi, Forwarding filter settings (forwardedindex.*) are only valid globally. See "Index Filter Settings" at https://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf. If you want to conditionally forward a set of indexes to every configured output, you can use this configuration. This is useful, for example, when an intermediate forwarder receives data from upstream forwarders that are outside your control, and you only need to forward a subset of events targeting specific indexes.

If you have a dedicated heavy forwarder receiving data on 1517/udp, and you want to clone that data to a syslog destination, you can add the _SYSLOG_ROUTING setting to the input stanza:

# inputs.conf
[udp:1517]
index = tmao
connection_host = none
no_priority_stripping = true
no_appending_timestamp = true
sourcetype = tmao_sourcetype
_SYSLOG_ROUTING = send-to-remotesiem

# outputs.conf
[syslog:send-to-remotesiem]
server = remotesiem:514
# Set type = tcp or type = udp; TLS is not supported.
# If you use a hostname in the server setting, I recommend using a local
# DNS resolver cache, i.e. bind (named), systemd-resolved, etc.
type = udp
# Use NO_PRI to indicate _raw already includes the PRI header.
priority = NO_PRI
# syslogSourceType tells Splunk that tmao_sourcetype already has syslog
# headers. If you do not use the syslogSourceType setting, Splunk
# will prefix _raw with additional host and timestamp data.
syslogSourceType = sourcetype::tmao_sourcetype
# maxEventSize is the maximum payload for UDPv4. Use a value compatible
# with your receiver.
maxEventSize = 65507

Splunk does not differentiate between RFC 3164 and RFC 5424, and _raw should be maintained and forwarded in its native format. This is aided by the inputs.conf no_priority_stripping and no_appending_timestamp settings, which may conflict with your source or source type's expectations for parsing _raw at index and search time. Regardless, Splunk will inject a malformed host value into _raw, e.g.:

In:  <0>Jun 9 11:15:00 host1 process[1234]: This is a syslog message.
Out: <0> splunk Jun 9 11:15:00 host1 process[1234]: This is a syslog message.

Your downstream receiver must handle the data appropriately. All of this can be mitigated using complex transforms to rewrite _raw into the format you need, but I recommend asking an additional question specific to that topic after you have syslog routing working.

Note that Splunk's syslog output can block indexQueue, which also contains tcpout outputs, and prevent Splunk from indexing and forwarding data. The outputs.conf [syslog] stanza dropEventsOnQueueFull setting can help mitigate blocking at the expense of data loss. The syslog output queue size cannot be changed, and the only way to scale syslog output is to increase the server.conf [general] stanza parallelIngestionPipelines setting.

Splunk wasn't designed specifically for syslog routing. If you need more control over syslog itself, you may want to use a syslog service like rsyslog or syslog-ng to receive and route the data to both Splunk and another downstream system. There are numerous third-party products with similar capability.
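As a rough sketch of those two mitigations (the values are arbitrary examples, not recommendations):

# outputs.conf (illustrative value only)
[syslog]
# Wait up to 10 seconds when the syslog output queue is full, then drop
# new events instead of blocking the pipeline.
dropEventsOnQueueFull = 10

# server.conf (illustrative value only)
[general]
# Each additional pipeline gets its own set of queues and outputs,
# at the cost of extra CPU and memory.
parallelIngestionPipelines = 2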
Please note that the DR site did not exist when we implemented the multi-site cluster, so we used the configuration below:

site_replication_factor = origin:2,total:2
available_sites = site1

With this, the cluster did not sync any data to the DR site, which did not exist at the beginning of the implementation. Now the DR site will come up and we will install 3 new indexers in it. We will reconfigure the cluster manager with the configuration below to add one copy of the data on the DR indexers. So the question is: will all logs (20TB) be transferred to the DR site, or just real-time logs?

Before installing the DR indexers:
site_replication_factor = origin:2, total:2
available_sites = site1

After installing the DR indexers:
site_replication_factor = origin:2, total:3
available_sites = site1,site2
Hello Community, I have forwarded the data for Trend Micro to another third-party SIEM (QRadar) using a HF. This is the configuration I did:

# props.conf
[source::udp:1411]
TRANSFORMS-send_tmao_route = send_tmao_to_remote_siem

# transforms.conf
[send_tmao_to_remote_siem]
REGEX = .
SOURCE_KEY = _MetaData:Index
DEST_KEY = _SYSLOG_ROUTING
FORMAT = remote_siem

# outputs.conf
[syslog:remote_siem]
server = remotesiem:1234
sendCookedData = false

I have verified with tcpdump that the packets are going from the HF to the third-party system, but they do not appear in the SIEM. Why is that? Any help?
The cluster will do what is necessary to meet the replication and search factors.  That may mean replicating 20TB of data to the other site.
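If it helps, once the new site_replication_factor is applied you can watch the fixup progress from the cluster manager; a quick check (assuming CLI access on the manager node) is:

# Run on the cluster manager; shows peer status and replication/search
# factor fulfillment
$SPLUNK_HOME/bin/splunk show cluster-status --verbose

The Indexer Clustering dashboard (Settings > Indexer clustering) shows the same information graphically.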