All Topics

Hi all, I am at an impasse and need some ideas on how to overcome it. So here is the challenge:

1) I have CSV data coming into Splunk via UF (50+ sources).
2) The data is slightly restructured (not even always), and later sent as DB output via Splunk DB Connect.
3) Data is output either using upsert (for the several DBs that need it) or without upsert (for the rest).
4) The automated search (for the DB output) runs periodically (about once per hour, for X amount of hours back), and it covers our needs regarding newly added data in previously added data sources.

HOWEVER, and here is the main challenge:

5) When a new source is added, its CSV source files are already full of old data (older than X amount of hours, sometimes months), so although the data is indexed, it never gets DB-outputted (great word). But I have to send the old data as well.
6) An automated search over "All time", all the time, is not applicable (50+ sources, often with 10+ source files each): the number of SELECTs sent to the target DB makes the Splunk search take far too long, and the target DB is overwhelmed and stops responding.

At the moment I have a workaround that involves modifying the automated output search to run over "All time" for that particular source for one iteration, and then changing it back to normal. But the sources keep growing in number, and this solution is more and more inconvenient. I have a feeling that there is a better (easy) solution, but it eludes me. If anyone could share ideas, that would be helpful. Have a great day!
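For what it's worth, one pattern that might remove the manual toggle: keep the hourly job untouched, and for each newly onboarded source run a separate one-time backfill search scoped to just that source over all time, ending in the same DB Connect output. A minimal sketch, assuming DB Connect 3.x (whose dbxoutput command sends search results to a configured output); the index, source path, and output name are hypothetical placeholders:

    index=csv_data source="/data/newly_added_source/*.csv" earliest=0 latest=now
    | dbxoutput output="my_upsert_output"

with the same restructuring steps as the hourly search inserted before the dbxoutput. Because the backfill touches a single source, the SELECT volume on the target DB stays bounded, and for the upsert outputs re-sending rows the hourly job already delivered should be harmless.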
Hello All, we have an issue wherein JSON files intermittently fail to come into Splunk from an SQS-based S3 input. The JSON file is generated every day at 3 AM and is generally ingested into Splunk, except for the occasions when the file gets generated but is not ingested. Unfortunately, there are no logs whatsoever in Splunk's internal logs for the times when the file is not ingested. Can anyone suggest what the issue could be here, or has this been encountered by anyone? The version of the AWS add-on being used here is 5.0.4 and the Splunk version is 8.1.5.
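One hedged place to look while troubleshooting: the add-on writes its own modular-input logs, which end up in _internal. The source pattern below is an assumption based on the add-on's log file naming, so adjust it to whatever | metadata type=sources index=_internal shows on the instance running the input:

    index=_internal source=*sqs_based_s3* (ERROR OR WARN)

It may also be worth checking the SQS side: a message that exceeds the queue's receive count moves to the dead-letter queue and never reaches the input, which from Splunk's side can look like the file simply never arrived.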
I'm trying to create a dashboard to find the old version and the new version of Splunk from the logs, but I am unable to find them.
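In case a starting point helps: the current version is available without any log parsing via the REST endpoint, e.g.

    | rest /services/server/info
    | table splunk_server version

For old vs. new versions, each restart also writes a startup line to _internal; the exact wording of that line is an assumption here and varies by release, so loosen the string if it returns nothing:

    index=_internal sourcetype=splunkd "Splunkd starting"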
Seeing different results when performing similar searches and not sure of the reason. The base search is the same for both.

First search:

    | timechart span=5m count(eval(if(event=="Started",total,0))) as "started", count(eval(if(event=="Completed",total,0))) as "completed"
    | eval divergence = completed - started

Second search:

    | timechart span=5m count(eval(event=="Started")) as "started", count(eval(event=="Completed")) as "completed"
    | eval divergence = completed - started

They both produce the same results, but reversed.

First query:

    time    started    completed    divergence
    time    18499      18517        18
    time    18426      18422        -4

Second query:

    time    started    completed    divergence
    time    18517      18499        -18
    time    18422      18426        4

Any help will be appreciated.
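A possible explanation, hedged because it depends on the data: count(eval(...)) counts the events where the inner expression returns a non-null value. In the first search, if(event=="Started",total,0) returns 0 (non-null, so counted) for every event that is not Started, and returns total for the Started events, which is null if those events have no total field. If the data contains only Started and Completed events, "started" then ends up counting the Completed events and vice versa, which is exactly the swap shown above. A sketch of a form that counts only the matching events either way:

    | timechart span=5m sum(eval(if(event=="Started",1,0))) as started, sum(eval(if(event=="Completed",1,0))) as completed
    | eval divergence = completed - started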
Hi, is there any other workaround to integrate Tripwire Enterprise with Splunk Cloud? Is there any third-party app for this? Thank you in advance.
Hi, I have the below output:

    1/16/2023 7:51:43 AM 1EE8 PACKET 000001D9C25E6180 UDP Rcv 10.8.64.132 646b Q [0001 D NOERROR] A (6)framer(3)com(0)
    UDP question info at 000001D9C25E6180
    Socket = 940
    Remote addr 10.8.64.132, port 55646
    Time Query=9030678, Queued=0, Expire=0
    Buf length = 0x0fa0 (4000)
    Msg length = 0x001c (28)
    Message:
    XID 0x646b
    Flags 0x0100

The desired output:

    name=framer.com IP=10.8.64.132

I am using this regex:

    sourcetype=DNSlog
    | rex field=_raw "NOERROR]\W+(?P<name>.*)\sUDP \S.*\s Socket.*\s Remote addr\W+(?P<IP>.*),"
    | rex mode=sed field=name "s/[\d;()]+//g"
    | stats count by name IP

The code above isn't working; can you please help me?
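A hedged sketch of an alternative extraction that avoids spanning the whole multi-line event with .* (the sourcetype is from the post; everything else is standard rex/sed):

    sourcetype=DNSlog
    | rex field=_raw "NOERROR\]\s+\w+\s+(?<name>\S+)"
    | rex field=_raw "Remote addr (?<IP>\d+\.\d+\.\d+\.\d+)"
    | rex mode=sed field=name "s/\(\d+\)/./g"
    | rex mode=sed field=name "s/^\.//"
    | rex mode=sed field=name "s/\.$//"
    | stats count by name IP

The first sed replaces each (n) length marker with a dot, turning (6)framer(3)com(0) into .framer.com., and the next two trim the leading and trailing dots, leaving framer.com.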
    Messages    Nov 20    Dec 20    Jan 20    Feb 20
    Messge 0    77        1         44        89
    Messge 1    1         3         5         15
    Messge 2    11        0         4         23
    Messge 3    1         0         0         0
    Messge 4    9         5         0         0
    Messge 5    1         1         0         0
    Messge 6    1         1         0         0
    Messge 7    0         1         0         0

I want to color the cells based on a range that differs per row. For example, in the Messge 0 row, color a cell green if its value is > 75 and red if it is < 10:

    Messge 0    77    1    44    89

In the Messge 1 row, color a cell green if its value is > 10 and red if it is < 5:

    Messge 1    1    3    5    15
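Simple XML's <format type="color"> rules apply per column, not per row, so one hedged workaround is to transpose the result so each message becomes a column that can carry its own color rule (the field names follow the table above):

    ... | transpose 10 header_field=Messages column_name=Month

After the transpose, Messge 0, Messge 1, etc. are columns, and each one can get its own per-column color thresholds in the table's formatting options. Coloring rows of the original layout generally requires a custom JS table cell renderer instead.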
Hello, I have the following query in one of the panels in my dashboard:

    | mstats p95(prometheus.container_memory_working_set_bytes) as p95_memory_bytes span=1m where pod=sf-mcdata--hydration-worker* AND stack=$stackLower$ by stack
    | stats min(p95_memory_bytes) as min_p95_memory_bytes by _time
    | timechart span=1m count as Availability
    | eval Span=1
    | stats sum(Availability) as totalAvailability, sum(Span) as totalSpans
    | eval AvailabilityPercent = 100*(totalAvailability/totalSpans)
    | fields AvailabilityPercent

There are some stacks that return too many events for this metric, and this causes a timeout and then the search fails. Is there a way to optimize this query to work with a lot of events?
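A hedged rewrite idea: since stack is already pinned to one value by $stackLower$, the by stack clause and the intermediate stats/timechart passes largely re-derive what mstats already returns (one row per 1-minute span). This sketch computes the same availability percentage with far fewer intermediate rows, using addinfo to get the search time bounds; it should match the original up to bucket-edge rounding:

    | mstats p95(prometheus.container_memory_working_set_bytes) as p95_memory_bytes span=1m where pod=sf-mcdata--hydration-worker* AND stack=$stackLower$
    | stats count as totalAvailability
    | addinfo
    | eval totalSpans = round((info_max_time - info_min_time) / 60)
    | eval AvailabilityPercent = 100 * totalAvailability / totalSpans
    | fields AvailabilityPercent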
Lately I have been reading the Splunk Validated Architectures (SVAs) [found here]. The document states in the introduction that there is a tool called "Interactive Splunk Validated Architecture (iSVA)" and links to it (https://sva.splunk.com), but this link redirects me to the PDF itself! I have searched for the tool on Google with no luck, so now I'm just wondering what happened to this tool and where I can find it. I really appreciate any help anyone can provide. Edit: Is there a newer version of the document? The current one is from January 2021.
Dear all, we set up a few alerts that send to one recipient (a DL), and the alerts work fine now. We want to send the alert to different recipients for environment-A and environment-B; the alert's search code and the rest of the setup are the same for the two environments. Is there any solution for this?
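One hedged approach, assuming each result row carries a field identifying its environment (called env below, which is hypothetical, as are the addresses): compute the recipient in the search and reference it from the email action with a $result...$ token:

    ... | eval recipient = case(env=="environment-A", "team-a@example.com", env=="environment-B", "team-b@example.com")

Then set the alert's To field to $result.recipient$. Note that $result.* tokens take their value from the first result row, so this fits best when one triggered alert maps to one environment; the alternative that always works is cloning the alert, one per environment, each with its own recipient list.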
Hello, I have a query in which I display some value over time in a chart, and I want to create an alert that triggers when this value is over some threshold for more than 10 minutes straight. How would I build this alert?
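A hedged sketch, assuming a 1-minute timechart and using value and a threshold of 50 as stand-ins for your actual field and threshold: flag each breaching minute, then use streamstats over a 10-row window so a row survives only when all of the last 10 minutes breached:

    ... | timechart span=1m max(value) as value
    | eval breach = if(value > 50, 1, 0)
    | streamstats window=10 sum(breach) as breach_minutes
    | where breach_minutes=10

Schedule the alert every few minutes over roughly the last 15 minutes and trigger when the number of results is greater than 0.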
I have a Fortigate firewall that was configured to send UDP logs. Lately I configured it to send TCP logs instead of UDP, and then I started to see something wrong with the way the logs are received: the logs are being cut at random locations within a single log, and the rest of the log continues after a new line and a new timestamp.

[screenshot showing a split log event]

The above screenshot is just one example; it happens regularly. I have tried to modify the syslog-ng.conf configuration file, in the options section to be specific:

    keep_timestamp(yes); ---> keep_timestamp(no);
    log_msg_size(65536); ---> log_msg_size(131072);

But the issue still persists! Can anybody please help me with this?
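If the stream can't be fixed at the syslog-ng layer, a hedged Splunk-side mitigation is to line-break on the start of a FortiGate record rather than on every newline. FortiGate syslog payloads conventionally begin with date=/time= key-value pairs, so a props.conf sketch for the parsing tier (the sourcetype name is a placeholder, and the lookahead assumes syslog-ng prepends its usual timestamp and host):

    [fortigate_syslog]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)(?=\w+\s+\d+\s+\d{2}:\d{2}:\d{2}\s+\S+\s+date=)

This only re-merges the split events inside Splunk; the splitting itself usually points at a TCP framing mismatch (octet-counting vs. newline-delimited) between the FortiGate and syslog-ng, so it is also worth checking which framing the firewall uses for reliable syslog.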
Hi, is there a list of recommended indexes for Security Essentials? I have to build a PoC in a greenfield deployment and would like to create the indexes in a way that makes them also usable in Enterprise Security. Thanks, Alex
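Not an official list, but since both Security Essentials and Enterprise Security map data onto the CIM data models, a common convention is one index per data domain; the names below are just that convention, not a requirement. An indexes.conf sketch:

    [wineventlog]
    homePath   = $SPLUNK_DB/wineventlog/db
    coldPath   = $SPLUNK_DB/wineventlog/colddb
    thawedPath = $SPLUNK_DB/wineventlog/thaweddb

    [network]
    homePath   = $SPLUNK_DB/network/db
    coldPath   = $SPLUNK_DB/network/colddb
    thawedPath = $SPLUNK_DB/network/thaweddb

with similar stanzas for endpoint, proxy, and authentication data. What matters for ES is less the index names than ensuring the sourcetypes are CIM-compliant and the indexes are included in the data model accelerations.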
Hey all - I successfully installed a forwarder on my RPI a while back but can't seem to be able to do it again. After unpacking splunkforwarder-9.0.3-dd0128b1f8cd-Linux-armv8.tgz into /opt/, I can't run the splunk binary. I have also tried the 64-bit and s390x builds, just cuz. I've googled extensively and had no luck with some solutions, such as creating a symbolic link to a certain file, but the file already exists on the system. I realize this file name says armv8, and armv8 "introduces the 64-bit instruction set", but the downloads page doesn't have armv7. Anyway...

    sudo /opt/splunkforwarder/bin/splunk start --accept-license
    /opt/splunkforwarder/bin/splunk: 1: Syntax error: "(" unexpected

    uname -a
    Linux zeek-pi 5.15.61-v7l+ #1579 SMP Fri Aug 26 11:13:03 BST 2022 armv7l GNU/Linux

    uname -r
    5.15.61-v7l+

Any suggestions on how to fix this? Thank you!
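For what it's worth, that particular error usually means the shell could not execute the file as a program and fell back to interpreting it as a script: the armv8 package is a 64-bit (aarch64) binary, while the uname output above shows a 32-bit armv7l kernel, which cannot run it. A quick way to confirm the mismatch:

    file /opt/splunkforwarder/bin/splunk
    # a 64-bit build reports something like "ELF 64-bit LSB executable, ARM aarch64",
    # which a 32-bit armv7l kernel cannot execute

If that is the cause, the fix is a 64-bit OS on the Pi, or checking whether an older forwarder release still offered a 32-bit ARM build.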
Hi All, I am using Splunk version 9.0.1. Is there a way to use the Schedule PDF Delivery option with a Dashboard Studio dashboard? If yes, please tell me how to do it.
I have Splunk set up and it establishes connections with syslog and a Splunk universal forwarder from a remote server. I have syslog-ng set up as follows: [screenshot]. You can see the connections established: [screenshot]. This is the inputs.conf for the Splunk universal forwarder: [screenshot]. But still no data is being received by Splunk: [screenshot]. I was able to use a PowerShell script to verify that the logs were being sent and delivered to the server running Splunk, so the issue is with Splunk itself. Am I missing something? And how would I go about troubleshooting and fixing the issue?
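A hedged first step, since forwarder connections and file ingestion are logged separately in _internal: check that the indexer actually sees the forwarder, and that the forwarder's tailing processor picked up the files syslog-ng writes (the monitor path in your inputs.conf is what to compare against):

    index=_internal source=*metrics.log* group=tcpin_connections
    | stats count by sourceIp hostname

    index=_internal sourcetype=splunkd (component=TailingProcessor OR component=TailReader)

If tcpin_connections shows the forwarder but the tailing logs never mention the syslog file path, the usual suspects are a wrong monitor path in inputs.conf or file permissions on the syslog-ng output directory.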
Why is the data model hierarchical? A data model is a hierarchical structure with a root data set and sub data sets. It struck me today that if the composition of the upper root data set is changed, the lower data sets are affected as well. If that is the case, doesn't every lower data set depend on the one above it, making it vulnerable to change? I don't understand why the structure is like this. Inheriting the characteristics of the data itself creates a dependency, which then has to be changed every time. For example, when a type of equipment is replaced, the composition of the fields changes each time, and the data model must change each time as well (everything below the changed set). I don't know if this is efficient. Why are data models designed as hierarchical?
I use a slightly customized Splunkd systemd unit file. When I apply a core upgrade to my Splunk installation, I find that my unit file has been renamed and replaced with what I can only assume is a default one from Splunk. Does anyone know if this default lives in a template file somewhere? I'm trying to see if there's a way to change it so that it contains my customizations.
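A hedged alternative to hunting for the template: systemd drop-in overrides survive the base unit file being rewritten, because they live in a separate directory that is layered on top of the vendor unit. Assuming the unit is named Splunkd.service (the default when boot-start is enabled as systemd-managed):

    sudo systemctl edit Splunkd

This creates and opens /etc/systemd/system/Splunkd.service.d/override.conf, where the customizations go, for example:

    [Service]
    LimitNOFILE=262144

An upgrade that replaces the base unit file then leaves the override intact.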
Hello! I'm trying to integrate Akamai with Splunk using this app: https://splunkbase.splunk.com/app/4310

But when trying to configure it, I get the error below:

    Encountered the following error while trying to save: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

Java was installed as required:

    java -version
    openjdk version "1.8.0_352"
    OpenJDK Runtime Environment (build 1.8.0_352-b08)
    OpenJDK 64-Bit Server VM (build 25.352-b08, mixed mode)

The app was installed on my Splunk Enterprise heavy forwarder, and the configuration was based on this document: https://techdocs.akamai.com/siem-integration/docs/siem-splunk-connector

I imported the certificate into the Java path:

    keytool -importcert -keystore /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.352.b08-2.el7_9.x86_64/jre/lib/security/cacerts -storepass changeit -file certificate.crt

Reference - https://stackoverflow.com/questions/21076179/pkix-path-building-failed-and-unable-to-find-valid-certification-path-to-requ?page=2&tab=scoredesc#tab-top

Errors in the log:

    01-13-2023 11:39:54.829 -0300 INFO SpecFiles - Found external scheme definition for stanza="TA-Akamai_SIEM://" from spec file="/opt/splunk/etc/apps/TA-Akamai_SIEM/README/inputs.conf.spec" with parameters="hostname, security_configuration_id_s_, client_token, client_secret, access_token, initial_epoch_time, final_epoch_time, limit, log_level, proxy_host, proxy_port"
    01-13-2023 11:39:55.241 -0300 INFO ModularInputs - Introspection setup completed for scheme "TA-Akamai_SIEM".
    01-13-2023 11:42:07.678 -0300 WARN ModularInputs - Argument validation for scheme=TA-Akamai_SIEM failed: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    01-13-2023 11:42:29.337 -0300 WARN ModularInputs - Argument validation for scheme=TA-Akamai_SIEM failed: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

I don't use a proxy... Does anyone have any ideas? I'm losing hope of getting this integration done! Thanks!
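Two hedged checks that often resolve this error. First, PKIX failures usually mean the trust chain is incomplete, so the intermediate/root CA of the Akamai endpoint may need importing rather than just the leaf certificate; the full chain served by the endpoint can be inspected with (the hostname is a placeholder for the configured Akamai SIEM host):

    openssl s_client -connect <akamai-siem-host>:443 -showcerts

Second, confirm the connector actually runs under the JVM whose cacerts was modified. Since the keytool -importcert above was run without -alias, the entry gets keytool's default alias mykey:

    keytool -list -keystore /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.352.b08-2.el7_9.x86_64/jre/lib/security/cacerts -storepass changeit | grep -i mykey

Compare against readlink -f $(which java); if the app launches a different java, its trust store never sees the import.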
I want to use the dedup command and see which values it removes from a field. Is this possible?
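A hedged sketch for a single-field dedup (fieldname is a placeholder): dedup keeps the first event it sees for each value, so the events it would drop are exactly those where a running per-value count exceeds 1:

    ... | streamstats count as occurrence by fieldname
    | where occurrence > 1

Running that in place of | dedup fieldname shows the removed events, while | where occurrence=1 reproduces what dedup keeps.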