All Posts


Hi, We were using Splunk Enterprise (8.2.5) and ESS (7.2.0) on Debian 12. Everything was working fine until I upgraded Splunk to 9.2.2 (first to 9.1.0 and then to 9.2.2). The next morning, when I checked Security Posture and then Incident Review, I found that no notables had been created in ES. I checked the scheduler, and the correlation searches ran successfully. I also tried creating an ad-hoc notable; although Splunk reported it was created successfully, nothing appeared in my Incident Review dashboard. Everything else (log ingestion, search, regex...) is working fine. I've been checking the logs for the past few hours but still have not found anything related to this issue. I also tried redeploying 7.2.0, but no luck. Any ideas?
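For anyone debugging a similar symptom, a quick way to tell whether notables are being written at all (versus just not rendering in Incident Review) is to search the notable index directly. A minimal sketch, assuming the default notable index name used by ES:

index=notable earliest=-24h
| stats count by source

If events show up here but not in Incident Review, the problem is more likely with the Incident Review dashboard or the ES app itself (for example, knowledge objects broken by the upgrade) than with notable creation.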
I know this is a very old thread, but at this point we are stuck with the same issue. We are trying to send data from a Splunk heavy forwarder to a third-party destination; the connectivity looks fine, but the outputs, props, and transforms configuration we have done does not seem to be working. I urgently need help with this if somebody can assist. This is how our config looks now:

outputs.conf:
[tcpout]
defaultGroup = nothing
disabled = false

[tcpout:datab]
server = x.x.x.x:xxxx
sendCookedData = false
compressed = false

props.conf:
[host::x.x.x.x]
TRANSFORMS-x.x.x.x = route_datab
#SHOULD_LINEMERGE = false

transforms.conf:
[route_datab]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = datab
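Two things worth checking in a setup like this: the [host::x.x.x.x] stanza in props.conf matches the host field of the incoming events (not the destination address), and props/transforms routing only runs where the data is parsed, so already-cooked data arriving from universal forwarders will not hit the transform on the heavy forwarder. A minimal sketch of sourcetype-based routing on the parsing tier, assuming a hypothetical sourcetype my_sourcetype for the events you want to send out:

props.conf:
[my_sourcetype]
TRANSFORMS-route_to_datab = route_datab

transforms.conf:
[route_datab]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = datab

With defaultGroup = nothing in [tcpout], only events that match a routing transform are forwarded at all, so it is also worth confirming that the events really arrive at the heavy forwarder uncooked.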
Hi all, We have an index, say index1, with a log retention of 7 days, where we receive logs for different applications. We now have a requirement to continuously copy all ERROR logs from one application to another index, say index2. The intention is to give index2 a higher retention so we can access the error logs for a longer period. What is the best way to implement such a mechanism? It's okay to run this job every day, or every 6 hours or so. Ideally all fields and field extractions would be retained in the target index as well.
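A common way to do this is a scheduled search that uses the collect command to write the matching events into the second index. A minimal sketch, assuming a hypothetical sourcetype my_app and a log_level field; the schedule and the time range should match each other (for example, run every 6 hours over the previous 6 hours) to avoid gaps or duplicates:

index=index1 sourcetype=my_app log_level=ERROR earliest=-6h@h latest=@h
| collect index=index2

Note that collect writes the events with the stash sourcetype by default; if you need the original sourcetype (and its search-time field extractions) preserved, check the collect documentation and the licensing implications before overriding it.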
Hi Community! I have a (kind of) special problem with my data routing.

Topology: We have 2 different clusters, one for ES and one for Splunk Enterprise. Each cluster consists of a minimum of 1 search head and 4 indexer peers (multisite cluster). All hosted on RedHat virtual machines.

Use case: On all Linux systems (including Splunk itself) some sources are defined for ES and some for normal Splunk Enterprise indexes. E.g.:
/var/log/secure - ES (Index: linux_security)
/var/log/audit/audit.log - ES (Index: linux_security)
/var/log/dnf.log - Splunk Enterprise (Index: linux_server)
/var/log/bali/rebootreq.log - Splunk Enterprise (Index: linux_server)

Problem: Routing these logs from the collecting tier (universal forwarder, heavy forwarder) is fine, because those components have both clusters defined as output groups, including the props/transforms config. On the search heads, only their own search peers are defined as the output group (ES search head --> ES indexer cluster, Splunk Enterprise search head --> Splunk Enterprise cluster). Because of several summary searches and inputs on the search heads, and the frequent changes made by power users, I'm not able to adjust the routing the way we do on the heavy forwarder. That works fine so far, except for the sources that need to be sent to the opposite cluster. The same applies to the logs directly on the indexer tier: the defined logs need to be sent to the other cluster. Simplified: the log /var/log/secure on a Splunk Enterprise cluster search head / indexer needs to be sent to the ES cluster indexers, and the log /var/log/dnf.log on an ES cluster search head / indexer needs to be sent to the Splunk Enterprise indexers.

What I have done already: I configured both indexer clusters to send data to each other based on the specific index in outputs.conf. With this, the events are now available in the correct cluster, but they are also present as duplicates in their source cluster. I am trying to get rid of the source events!

Splunk Enterprise indexer outputs.conf:
[indexAndForward]
index = true
[tcpout]
...
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)
forwardedindex.3.blacklist = .*
forwardedindex.4.whitelist = linux_secure
forwardedindex.5.blacklist = _.*
forwardedindex.filter.disable = false
useACK = false
useClientSSLCompression = true
useSSL = true
[tcpout:es_cluster]
server = LINUXSPLIXPRD50.roseninspection.net:9993, LINUXSPLIXPRD51.roseninspection.net:9993, LINUXSPLIXPRD52.roseninspection.net:9993, LINUXSPLIXPRD53.roseninspection.net:9993

ES indexer outputs.conf:
[indexAndForward]
index = true
[tcpout]
...
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)
forwardedindex.3.blacklist = .*
forwardedindex.4.whitelist = linux_server
forwardedindex.5.blacklist = _.*
forwardedindex.filter.disable = false
useACK = false
useClientSSLCompression = true
useSSL = true
[tcpout:rosen_cluster]
server = LINUXSPLIXPRD01.roseninspection.net:9993, LINUXSPLIXPRD02.roseninspection.net:9993, LINUXSPLIXPRD03.roseninspection.net:9993, LINUXSPLIXPRD04.roseninspection.net:9993

Additionally, I tried to set up props.conf / transforms.conf like we do on the HF, to catch at least the events from the search heads and send them to the correct _TCP_ROUTING queue, but without any success. I guess because they have already been parsed on the search head.

Splunk Enterprise props.conf:
[linux_secure]
...
SHOULD_LINEMERGE = False
TIME_FORMAT = %b %d %H:%M:%S
TRANSFORMS-routingLinuxSecure = default_es_cluster

Splunk Enterprise transforms.conf:
[default_es_cluster]
...
DEST_KEY = _TCP_ROUTING
FORMAT = es_cluster
REGEX = .
SOURCE_KEY = _raw

ES props.conf:
[rhel_dnf_log]
...
SHOULD_LINEMERGE = True
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%Q
TRANSFORMS-routingLinuxDNF = default_rosen_cluster

ES transforms.conf:
[default_rosen_cluster]
...
DEST_KEY = _TCP_ROUTING
FORMAT = rosen_cluster
REGEX = .
SOURCE_KEY = _raw

Example events (source /var/log/dnf.log), columns: _time | _raw | host | source | index | splunk_server | count
2024-09-10 12:07:21 | 2024-09-10T12:07:21+0000 DDEBUG timer: config: 3 ms | linuxsplixprd51.roseninspection.net (Indexer ES) | /var/log/dnf.log | last_chance, linux_server | linuxsplixprd01.roseninspection.net, linuxsplixprd51.roseninspection.net | 2
2024-09-11 12:24:31 | 2024-09-11T10:24:31+0000 DDEBUG timer: config: 4 ms | linuxsplixprd01.roseninspection.net (Indexer Splunk Enterprise) | /var/log/dnf.log | linux_server | linuxsplixprd01.roseninspection.net | 1
2024-09-10 13:15:04 | 2024-09-10T11:15:04+0000 DDEBUG timer: config: 3 ms | linuxsplshprd50.roseninspection.net (Search head ES) | /var/log/dnf.log | last_chance, linux_server | linuxsplixprd01.roseninspection.net, linuxsplixprd50.roseninspection.net | 2
2024-09-10 13:22:53 | 2024-09-10T11:22:53+0000 DDEBUG Base command: makecache | linuxsplshprd01.roseninspection.net (Search head Splunk Enterprise) | /var/log/dnf.log | linux_server | linuxsplixprd01.roseninspection.net | 1
2024-09-11 11:55:51 | 2024-09-11T09:55:51+0000 DEBUG cachedir: /var/cache/dnf | kuluxsplhfprd01.roseninspection.net (Heavy Forwarder) | /var/log/dnf.log | linux_server | linuxsplixprd01.roseninspection.net | 1

Any idea how I can get rid of those duplicate events at the source cluster (last_chance)?
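One approach that avoids parsing-time props/transforms entirely is to set the routing per input in inputs.conf, since _TCP_ROUTING can be assigned directly on a monitor stanza. A minimal sketch for the Splunk Enterprise search heads, assuming the es_cluster output group shown above is also defined in their outputs.conf:

inputs.conf (on Splunk Enterprise search heads):
[monitor:///var/log/secure]
index = linux_security
_TCP_ROUTING = es_cluster

[monitor:///var/log/audit/audit.log]
index = linux_security
_TCP_ROUTING = es_cluster

On the indexers the picture is different because [indexAndForward] index = true indexes everything locally before the forwardedindex filters decide what gets forwarded, which is exactly the duplicate you see. The settings to look at for that case are selectiveIndexing in [indexAndForward] together with _INDEX_AND_FORWARD_ROUTING on the inputs that should still be indexed locally, so that local indexing becomes opt-in per input rather than global; check the outputs.conf and inputs.conf specs for the exact behavior in your version.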
Hi, Is there any Splunk Technology Add-on (TA) for Dell Unity storage? Any suggestions, please? I only see Dell PowerMax and Dell PowerScale (formerly Dell Isilon) on Splunkbase.
| tstats summariesonly=true values(DNS.query) as query FROM datamodel="Network_Resolution_DNS_v2" where DNS.message_type=Query groupby _time
| mvexpand query
| streamstats current=f last(_time) as last_time by query
| eval gap=(last_time - _time)
| stats count avg(gap) AS AverageBeaconTime var(gap) AS VarianceBeaconTime BY query
| eval AverageBeaconTime=round(AverageBeaconTime,3), VarianceBeaconTime=round(VarianceBeaconTime,3)
| sort -count
| where VarianceBeaconTime < 60 AND count > 2 AND AverageBeaconTime>1.000
| table query VarianceBeaconTime count AverageBeaconTime

Sorry, when I copied it I pasted the wrong datamodel. This is the CIM model, but I duplicated it and added some additional fields; for this query I only need the query field and time. The original query is from https://www.splunk.com/en_us/blog/security/detect-hunt-dns-exfiltration.html?locale=en_us
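When a tstats search against a custom copy of a CIM datamodel returns nothing, a quick sanity check is whether the custom model is actually accelerated, because summariesonly=true only reads the acceleration summaries. A minimal sketch, assuming the model name Network_Resolution_DNS_v2 used above:

| tstats summariesonly=true count FROM datamodel="Network_Resolution_DNS_v2"

If this returns 0 while the same search with summariesonly=false returns events, the acceleration summaries for the custom model have not been built (or the model is not accelerated at all).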
Dears, I'm getting an error after loading adrum.js: Refused to frame https://cdn.appdynamics.com/ because it violates the following Content Security Policy directive: frame-src 'self' www.google.com. The EUM is reachable. EUM-processor: version '24.4.0.0', commit cd:XXXXXXXXXXXXXXb, build release/24.4.0.next #24.4.0-35342, timestamp 2024-05-02 01:18:33. The backend is Microsoft SharePoint. Both the CDN and the EUM servers have been added to the CSP. Regards, Khalid
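For reference, the browser enforces frame-src exactly as delivered, so the CDN has to appear in that specific directive, not only in script-src or connect-src. A minimal sketch of what the relevant part of the response header might look like; the EUM hostname is a placeholder and depends on your deployment (on-prem EUM server or the SaaS endpoint):

Content-Security-Policy: frame-src 'self' www.google.com https://cdn.appdynamics.com https://your-eum-server.example.com;

If the error persists after the change, confirm in the browser developer tools (Network tab) which CSP header the page actually returns, since SharePoint can emit CSP from more than one place and every delivered policy must be satisfied.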
Hi @KhalidAlharthi , once you give the indexers sufficient disk space, they should realign automatically, even if it takes some time. You can check the replication status from the Cluster Master after a while. The Cluster Master also gives you the option to force a rebalance. Ciao. Giuseppe
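For reference, both checks can also be done from the CLI on the manager (Cluster Master) node; a minimal sketch, run as the Splunk user from $SPLUNK_HOME/bin:

./splunk show cluster-status --verbose
./splunk rebalance cluster-data -action start

The first command shows replication and search factor status per peer; the second starts a data rebalance across the peers.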
On a Linux HF, we were seeing this error in 9.0.5.1 and now in 9.2.1 as well. It worked after removing the "persistentQueueSize" parameter that had been set in inputs.conf. This is the only change we made. Thank you.
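For anyone hitting the same thing: persistent queues are only supported on certain input types (network and scripted inputs, not file monitors), and the persistent queue has to be larger than the in-memory queue in front of it. A minimal sketch of a combination that is generally valid, using a hypothetical TCP input port:

inputs.conf:
[tcp://5140]
queueSize = 10MB
persistentQueueSize = 5GB

If the parameter was set on an unsupported input stanza, or with a size smaller than the in-memory queue, removing it (as above) or correcting it should clear the error.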
How can I solve it? The disk volume usage is high, but how can I ensure the data gets aligned? Are there commands or something to check?
Hi @KhalidAlharthi , the indexing queue is full, probably because you don't have enough disk space or there is too much data for the resources the indexers have. Ciao. Giuseppe
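To see which queues are filling up and how badly, the metrics data in _internal can be charted; a minimal sketch run from a search head:

index=_internal sourcetype=splunkd source=*metrics.log* group=queue
| eval pct_full=round(current_size_kb/max_size_kb*100,2)
| timechart span=5m max(pct_full) by name

A queue pinned near 100% (often indexqueue when disk is the bottleneck) usually points at where the backpressure starts.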
Hi @KhalidAlharthi , as I said, it was probably a temporary connectivity issue (in my project it was related to a Disaster Recovery test) that is quickly solved, but the indexers require some time to realign the data, and sometimes it's better to perform a rolling restart. Ciao. Giuseppe
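A rolling restart of the peers can be triggered from the manager node CLI; a minimal sketch, run from $SPLUNK_HOME/bin on the Cluster Master:

./splunk rolling-restart cluster-peers

This restarts the peers in batches (a percentage at a time by default); there is also a -searchable true option if search results need to stay complete during the restart.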
Hi @Mark_Heimer , check how you created the drilldown filter, because these are HTML/URL escape codes. Ciao. Giuseppe
Hello Terence, My concern is not that "All" BTs are not showing up; I am satisfied with what is shown in the BT list. I am asking why it is creating "All Other Traffic". My limit (200) is not reached yet, so it should not create "All Other Traffic"; per the AppDynamics documentation, "All Other Traffic" should only be created once the BT limit is full. So why is it being created in my case? Kindly let me know. Thanks, Satishkumar
There is another way to achieve similar results. Instead of the metadata command (which is great in its own right), you can use the tstats command, which may be a bit slower than metadata but can do more complicated things with indexed fields.

| tstats values(source) AS source WHERE index=* source!="*log.2024-*"
| mvexpand source
| <the rest of your evals>
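If the only goal is a list of sources, a variation that skips the mvexpand step is to group by source directly; a sketch under the same assumption about the source naming:

| tstats count WHERE index=* source!="*log.2024-*" BY source
| fields source

Grouping with BY returns one row per source, which is usually cheaper than expanding a large multivalue field.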
What is a "DNS_v2" datamodel? It's not one of the CIM-defined ones. Your original search uses |datamodel Network_Resolution_DNS_v2 search whereas your tstats use | tstats summariesonly=true value... See more...
What is a "DNS_v2" datamodel? It's not one of the CIM-defined ones. Your original search uses |datamodel Network_Resolution_DNS_v2 search whereas your tstats use | tstats summariesonly=true values(DNS.query) as query FROM datamodel="DNS_v2" [...] Few more hints: 1. Use preformatted paragraph style or a code block to paste SPL - it helps in reability and prevents the forum interface from rendering some text as emojis and such. 2. What do you mean by "doesn't work"? Do you get an error? Or you simply get different results than expected? If so, how they differ? 3. There are two typical approaches to debugging SPL - either build it from the start adding commands one by one until they stop yielding proper results or start with the whole search and remove commands from the end one by one until they start producing proper results - then you know which step is the problematic one. 4. Often it's much easier for people to help you when you provide sample(s) of your data and describe what you want to do with it than posting some (sometimes fairly complicated) SPL without additional comments as to what you want to achieve.
I'm trying to import a CSV file generated by the NiFi GetSplunk component. It retrieves events from a Splunk instance SPL-01 and stores them in a CSV file with the following header: _serial,_time,source,sourcetype,host,index,splunk_server,_raw
I do an indexed_extraction=CSV when I import the CSV files on another Splunk instance, SPL-02. If I just import the file, the host will be the instance SPL-02, and I want the host to be SPL-01. I got past this by having a transform as follows:

[mysethost]
INGEST_EVAL = host=$field:host$

Question 1: That gives me the correct host name set to SPL-01, but I still have an EXTRACTED_HOST field when I look at events in Splunk. I found the article below, where I got the idea to use $field:host$, but it also uses ":=" for assignment; that did not work for me, so I used "=" and then it worked. I also tried setting "$field:host$=null()" but that had no effect. I found this article: https://community.splunk.com/t5/Getting-Data-In/How-to-get-the-host-value-from-INDEXED-EXTRACTIONS-json/m-p/577392

Question 2: I have a problem getting the data from the time field in. I tried using TIMESTAMP_FIELDS in props.conf for this import. I tried the following:
TIMESTAMP_FIELDS=_time (did not work)
TIMESTAMP_FIELDS=$field:_time$ (did not work)
I then renamed the header line so the time column was named "xtime" instead, and then I could use props.conf and set TIMESTAMP_FIELDS=xtime. How can I use the _time field directly?
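For comparison, a minimal sketch of the combination the post describes as working, with the header column renamed to xtime; the sourcetype name nifi_csv is a placeholder, the TRANSFORMS reference is added so the transform actually applies, and the TIME_FORMAT is an assumption (adjust it to whatever timestamp format the exported file actually contains):

props.conf:
[nifi_csv]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = xtime
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
TRANSFORMS-sethost = mysethost

transforms.conf:
[mysethost]
INGEST_EVAL = host=$field:host$

The underscore prefix is the likely culprit for question 2: _time is an internal field that the indexer populates itself, so a CSV column literally named _time tends to collide with it, which is consistent with the rename to xtime making TIMESTAMP_FIELDS work.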
1. While I think I've read somewhere about some dirty tricks to import the events from an evtx file, it's not something that's normally done. Usually you monitor the eventlog channels, not the evt(x) files themselves.
2. If you want to simulate a live system, it's usually not enough to ingest a batch of events from some earlier-gathered dump, since the events will get indexed in the past. For such simulation work you usually use event generators like TA_eventgen.
Hi, Another strange thing that happens to me, and I just realized it: when I refresh the "Incident Review" page with correctly loaded filters and true notable results showing, the "source" filter becomes something like this: source: Access%20-%20Excessive%20Failed%20Logins%20-%20Rule. And no results are shown on the page after the refresh. Thanks.
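That filter value has simply been URL-encoded (%20 is an encoded space). As an illustration, the urldecode eval function reverses it; a minimal sketch you can run in a search bar:

| makeresults
| eval source=urldecode("Access%20-%20Excessive%20Failed%20Logins%20-%20Rule")

which yields "Access - Excessive Failed Logins - Rule", the correlation search name that the source filter actually needs to match.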
Hello Members, I have problems between the peers and the managing node (CM). I tried to identify the issue, but I can't find a possible way to fix it, because I didn't notice any problems regarding the connectivity. See the pic below.
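When the manager shows peers as down but basic network connectivity looks fine, the clustering-specific logs usually say why; a minimal sketch of where to look (the component wildcard covers the usual clustering components, but names may vary by version):

On the manager node, from $SPLUNK_HOME/bin:
./splunk show cluster-status --verbose

From a search over the internal logs:
index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) component=CM*
| stats count by host, component, log_level

Common causes surfaced there are a mismatched pass4SymmKey between the manager and the peers, and the management port (8089 by default) being blocked in one direction only.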