All Posts


The reason I need to "copy" the logs instead of splitting them is that there are some searches that use index1 to search the same data, so we still need to support that dependency. As the log volume being copied is relatively small, the additional license usage is not a major issue. It looks like a scheduled search with collect is what fits our requirement.
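For reference, a minimal sketch of such a scheduled search, using the index names from this thread ("ERROR" as the filter string and the hourly window are illustrative; adjust to your data):

index=index1 "ERROR" earliest=-1h@h latest=@h
| collect index=index2

Scheduled a few minutes past each hour over the previous hour, each event is copied exactly once. As noted elsewhere in this thread, collect writes with sourcetype=stash by default, which is not counted against the license, while keeping the original sourcetype would be.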
Hi, yes, I know the setup might look a bit over-engineered, but it best fits our needs, as we need to "logically" separate the ES data from the other Splunk use cases. Anyway, I wasn't aware that I can run a Universal Forwarder alongside another Splunk Enterprise component. Is this supported, or is it at least officially documented somewhere?
To be fully honest, this setup seems a bit overcomplicated. I've seen setups with a single indexer cluster and multiple SHCs performing different tasks connected to it, but multiple separate environments with events still being sent between them... that's a bit weird. But hey, it's your environment. Actually, since you want to do some unusual things with OS-level logs, this might be that one unique use case where it makes sense to install a UF alongside a normal Splunk Enterprise installation. That might be the easiest and least confusing solution.
Hello all, I have a requirement to list all of our assets and show the last time each one appeared in the logs of many different tools. I wanted to use the KV Store for this. We would run a search against each tool's logs and then update that asset's "last seen" time in the KV Store. I've attempted this a few ways, but I can't seem to get it going. I have the KV Store built with one column of last_seen times for one tool, but I am lost on how to update the last_seen times for other tools on existing entries in the KV Store. Any guidance would be appreciated. Thank you!
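One possible pattern, sketched under assumptions (a KV store lookup already defined as assets_last_seen with key field asset, and hypothetical per-tool columns last_seen_tool1, last_seen_tool2): let each tool's scheduled search merge only its own column back into the collection with inputlookup append=true and outputlookup:

| tstats max(_time) as new_seen where index=tool1_index by host
| rename host as asset
| eval last_seen_tool1=new_seen
| fields asset last_seen_tool1
| inputlookup append=true assets_last_seen
| stats max(last_seen_tool1) as last_seen_tool1 max(last_seen_tool2) as last_seen_tool2 by asset
| outputlookup assets_last_seen

Taking max() per column keeps the existing values for the tools this search does not touch (their values are null on the new rows), while advancing only this tool's column, and outputlookup then rewrites the merged rows to the collection.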
Different retention periods are the textbook case for splitting data between different indexes. So instead of copying, you might consider simply sending some of your data to one index and the rest (the error logs) to another index. Otherwise - if you copy the events from one index to another - those events will be counted twice against your license, which doesn't make much sense.
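For reference, per-index retention is controlled in indexes.conf with frozenTimePeriodInSecs; a minimal sketch with illustrative stanza names and default paths (7 days vs. 90 days):

[index1]
homePath = $SPLUNK_DB/index1/db
coldPath = $SPLUNK_DB/index1/colddb
thawedPath = $SPLUNK_DB/index1/thaweddb
# 7 days
frozenTimePeriodInSecs = 604800

[index2]
homePath = $SPLUNK_DB/index2/db
coldPath = $SPLUNK_DB/index2/colddb
thawedPath = $SPLUNK_DB/index2/thaweddb
# 90 days
frozenTimePeriodInSecs = 7776000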
Hi @jpillai , do you want to divide the logs between the two indexes, or copy a part of the logs (the ERRORs) into the second one? In the second case you pay the license twice. Anyway, if you want to divide the logs, you have to find the regex that identifies the logs for index2 and put the following props.conf and transforms.conf on your indexers or (if present) on your Heavy Forwarders:

# etc/system/local/transforms.conf
[overrideindex]
DEST_KEY = _MetaData:Index
REGEX = .
FORMAT = my_new_index

# etc/system/local/props.conf
[mysourcetype]
TRANSFORMS-index = overrideindex

The REGEX to use is the one you identified. If instead you want to copy the ERROR logs into the second index, you can use the collect command, but you either pay the license twice (using the original sourcetype) or you must use the stash sourcetype (in this case you don't pay the license twice). In other words, you have to schedule a search like the following (if "ERROR" is the string that identifies the logs to copy):

index=index1 "ERROR" | collect index=index2

My hint is to override the index value for the logs that you want to index in index2. Ciao. Giuseppe
OK. What does "doesn't work" mean here? And do you get any results from the initial tstats search? Stupid question - is your datamodel even accelerated?
If you uncheck the option, what's the number of BT you see?
Hi, we were using Splunk Enterprise (8.2.5) and ESS (7.2.0) on Debian 12. Everything was working fine until I upgraded Splunk to 9.2.2 (first to 9.1.0 and then to 9.2.2). The next morning, when I checked Security Posture and then Incident Review, I found that no notables had been created in ESS. I checked the scheduler, and the correlation searches were running successfully. I also tried creating an ad-hoc notable, but although Splunk told me it was created successfully, nothing appeared in my Incident Review dashboard. Everything else (log ingestion, search, regex...) is working fine. I've been checking the logs for the past few hours but I still have not found anything related to this issue. I also tried redeploying 7.2.0, but no luck. Any ideas?
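Two quick checks that may help narrow this down (a sketch; the 24-hour window is arbitrary and the "*Rule*" filter assumes the default ES naming convention for correlation searches). Notables are ordinary events written to the notable index, and scheduler activity is logged in _internal:

index=notable earliest=-24h
| stats count by search_name

index=_internal sourcetype=scheduler savedsearch_name="*Rule*" earliest=-24h
| stats count by savedsearch_name status

If events do land in index=notable but Incident Review stays empty, the problem is more likely on the ES app/knowledge-object side than in the correlation searches themselves.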
Knowing that it's a very, very old thread, we are still stuck with the same issue at this hour. We are trying to send data from a Splunk Heavy Forwarder to a third-party destination; the connectivity looks fine, but the outputs, props & transforms config we have in place does not seem to be working. I need urgent help with this case if somebody can assist. This is how our config looks now:

outputs.conf:
[tcpout]
defaultGroup = nothing
disabled = false

[tcpout:datab]
server = x.x.x.x:xxxx
sendCookedData = false
compressed = false

props.conf:
[host::x.x.x.x]
TRANSFORMS-x.x.x.x = route_datab
#SHOULD_LINEMERGE = false

transforms.conf:
[route_datab]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = datab
Hi all, we have an index, say index1, with a log retention of 7 days, where we receive logs for different applications. Now we have a requirement to continuously copy all ERROR logs from one application to another index, say index2. The intention is to give index2 a longer retention so we can access the error logs for a longer period. What is the best way to implement such a mechanism? It is okay to run this job every day, or every 6 hours or so. It would be best to retain all fields and field extractions of the logs in the target index as well.
Hi Community! I have a (kind of) special problem with my data routing.

Topology:
We have 2 different clusters, one for ES and one for Splunk Enterprise. Each cluster consists of a minimum of 1 Search Head and 4 Indexer peers (multisite cluster). All hosted on RedHat virtual machines.

Use case:
On all Linux systems (including Splunk itself), some sources are defined for ES and some for normal Splunk Enterprise indexes. E.g.:
/var/log/secure - ES (Index: linux_security)
/var/log/audit/audit.log - ES (Index: linux_security)
/var/log/dnf.log - Splunk Enterprise (Index: linux_server)
/var/log/bali/rebootreq.log - Splunk Enterprise (Index: linux_server)

Problem:
The routing of those logs from the collecting tier (Universal Forwarder, Heavy Forwarder) is fine, because those components have both clusters defined as output groups, including the props/transforms config. On the Search Heads, only their own search peers are defined as the output group (ES Search Head --> ES Indexer Cluster, Splunk Enterprise Search Head --> Splunk Enterprise Indexer Cluster). Because of several summary searches and inputs on the Search Heads, and the frequent changes made by power users, I am not able to adjust the routing there like we do on the Heavy Forwarder. That works fine so far, except for the sources that need to be sent to the opposite cluster. The same applies to the logs collected directly on the indexer tier: the defined logs need to be sent to the other cluster.

So, simplified:
The log /var/log/secure on a Splunk Enterprise Cluster Search Head / Indexer needs to be sent to the ES Cluster Indexers.
The log /var/log/dnf.log on an ES Cluster Search Head / Indexer needs to be sent to the Splunk Enterprise Indexers.

What I have done already:
I configured both indexer clusters to send data to each other based on the specific index in outputs.conf. With this, the events are now available in the correct cluster, but they are also present as duplicates in their source cluster. I am trying to get rid of the source events!

Splunk Enterprise Indexer outputs.conf:
[indexAndForward]
index = true

[tcpout]
...
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)
forwardedindex.3.blacklist = .*
forwardedindex.4.whitelist = linux_secure
forwardedindex.5.blacklist = _.*
forwardedindex.filter.disable = false
useACK = false
useClientSSLCompression = true
useSSL = true

[tcpout:es_cluster]
server = LINUXSPLIXPRD50.roseninspection.net:9993, LINUXSPLIXPRD51.roseninspection.net:9993, LINUXSPLIXPRD52.roseninspection.net:9993, LINUXSPLIXPRD53.roseninspection.net:9993

ES Indexer outputs.conf:
[indexAndForward]
index = true

[tcpout]
...
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)
forwardedindex.3.blacklist = .*
forwardedindex.4.whitelist = linux_server
forwardedindex.5.blacklist = _.*
forwardedindex.filter.disable = false
useACK = false
useClientSSLCompression = true
useSSL = true

[tcpout:rosen_cluster]
server = LINUXSPLIXPRD01.roseninspection.net:9993, LINUXSPLIXPRD02.roseninspection.net:9993, LINUXSPLIXPRD03.roseninspection.net:9993, LINUXSPLIXPRD04.roseninspection.net:9993

Additionally, I tried to set up props.conf / transforms.conf like we do on the HF, to catch at least the events from the Search Heads and send them to the correct _TCP_ROUTING group, but without any success. I guess because they already got parsed on the Search Head.

Splunk Enterprise props.conf:
[linux_secure]
...
SHOULD_LINEMERGE = False
TIME_FORMAT = %b %d %H:%M:%S
TRANSFORMS-routingLinuxSecure = default_es_cluster

Splunk Enterprise transforms.conf:
[default_es_cluster]
...
DEST_KEY = _TCP_ROUTING
FORMAT = es_cluster
REGEX = .
SOURCE_KEY = _raw

ES props.conf:
[rhel_dnf_log]
...
SHOULD_LINEMERGE = True
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%Q
TRANSFORMS-routingLinuxDNF = default_rosen_cluster

ES transforms.conf:
[default_rosen_cluster]
...
DEST_KEY = _TCP_ROUTING
FORMAT = rosen_cluster
REGEX = .
SOURCE_KEY = _raw

Example: Source: /var/log/dnf.log

_time | _raw | host | source | index | splunk_server | count
2024-09-10 12:07:21 | 2024-09-10T12:07:21+0000 DDEBUG timer: config: 3 ms | linuxsplixprd51.roseninspection.net (Indexer ES) | /var/log/dnf.log | last_chance, linux_server | linuxsplixprd01.roseninspection.net, linuxsplixprd51.roseninspection.net | 2
2024-09-11 12:24:31 | 2024-09-11T10:24:31+0000 DDEBUG timer: config: 4 ms | linuxsplixprd01.roseninspection.net (Indexer Splunk Enterprise) | /var/log/dnf.log | linux_server | linuxsplixprd01.roseninspection.net | 1
2024-09-10 13:15:04 | 2024-09-10T11:15:04+0000 DDEBUG timer: config: 3 ms | linuxsplshprd50.roseninspection.net (Search head ES) | /var/log/dnf.log | last_chance, linux_server | linuxsplixprd01.roseninspection.net, linuxsplixprd50.roseninspection.net | 2
2024-09-10 13:22:53 | 2024-09-10T11:22:53+0000 DDEBUG Base command: makecache | linuxsplshprd01.roseninspection.net (Search head Splunk Enterprise) | /var/log/dnf.log | linux_server | linuxsplixprd01.roseninspection.net | 1
2024-09-11 11:55:51 | 2024-09-11T09:55:51+0000 DEBUG cachedir: /var/cache/dnf | kuluxsplhfprd01.roseninspection.net (Heavy Forwarder) | /var/log/dnf.log | linux_server | linuxsplixprd01.roseninspection.net | 1

Any idea how I can get rid of those duplicate events in the source cluster (last_chance)?
Hi, is there any Splunk Technology Add-on (TA) for Dell Unity storage? Suggestions please. I only see Dell PowerMax and Dell PowerScale (formerly Dell Isilon) on Splunkbase.
| tstats summariesonly=true values(DNS.query) as query FROM datamodel="Network_Resolution_DNS_v2" where DNS.message_type=Query groupby _time
| mvexpand query
| streamstats current=f last(_time) as last_time by query
| eval gap=(last_time - _time)
| stats count avg(gap) AS AverageBeaconTime var(gap) AS VarianceBeaconTime BY query
| eval AverageBeaconTime=round(AverageBeaconTime,3), VarianceBeaconTime=round(VarianceBeaconTime,3)
| sort -count
| where VarianceBeaconTime < 60 AND count > 2 AND AverageBeaconTime>1.000
| table query VarianceBeaconTime count AverageBeaconTime

Sorry, when I copied I pasted the wrong datamodel. This is the CIM model, but I duplicated it and added some additional fields; for this query I only need the query field and _time. The original query is from https://www.splunk.com/en_us/blog/security/detect-hunt-dns-exfiltration.html?locale=en_us
Dears, I'm getting an error after loading adrum.js: Refused to frame https://cdn.appdynamics.com/ because it violates the following Content Security Policy directive: frame-src 'self' www.google.com. The EUM is reachable. EUM-processor: version-'24.4.0.0', commit-cd:XXXXXXXXXXXXXXb, build-release/24.4.0.next #24.4.0-35342, timestamp=2024-05-02 01:18:33. The backend is Microsoft SharePoint. The CSP has been updated with both the CDN and EUM servers. Regards, Khalid
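For comparison, a frame-src directive that would allow framing the AppDynamics CDN might look something like the line below; this is only a sketch, and the exact source list (including the EUM host placeholder) depends on your SharePoint CSP configuration:

Content-Security-Policy: frame-src 'self' www.google.com https://cdn.appdynamics.com https://<your-eum-server>

The error quoted above is specifically a frame-src violation, so the CDN needs to be allowed in that directive, not only in script-src or connect-src.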
Hi @KhalidAlharthi , once you provide sufficient disk space, the indexers should realign automatically, even if some time is required. You can check the replication status from the Cluster Master after a while. The Cluster Master also gives you the option to force a data rebalance. Ciao. Giuseppe
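For example, from the CLI on the Cluster Master (a sketch; verify the exact options against the documentation for your Splunk version):

# check replication/search factor and peer status
splunk show cluster-status --verbose
# start a data rebalance across the peers
splunk rebalance cluster-data -action start
# check rebalance progress
splunk rebalance cluster-data -action status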
On a Linux HF, we were seeing this error in 9.0.5.1 and now in 9.2.1 as well. It worked after disabling the "persistentQueueSize" parameter in inputs.conf. This is the only change we made. Thank you.
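For context, persistentQueueSize is set per input stanza in inputs.conf; a hypothetical example of the kind of setting that was disabled (the stanza and size here are illustrative, not the poster's actual config):

[tcp://:5514]
persistentQueueSize = 50MB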
How can I solve it? The disk volume is high, but how can I make sure the data gets realigned? Is there a command or something to check?
Hi @KhalidAlharthi , the indexing queue is full, probably because you don't have enough disk space or there is too much data for the resources your Indexers have. Ciao. Giuseppe
Hi @KhalidAlharthi , as I said, it was probably a temporary connectivity issue (in my project it was related to a Disaster Recovery test) that was quickly solved, but the Indexers require some time to realign the data, and sometimes it's better to perform a rolling restart. Ciao. Giuseppe