All Topics


I have a Fortigate firewall that was originally configured to send UDP logs. Recently I configured it to send TCP logs instead of UDP, and since then something is wrong with the way the logs are received: a single log gets cut at a random location, and the rest of the log continues on a new line with its own timestamp.

The screenshot above is just one example; it happens regularly. I have tried modifying the syslog-ng.conf configuration file, in the options section to be specific:

keep_timestamp(yes); ---> keep_timestamp(no);
log_msg_size(65536); ---> log_msg_size(131072);

But the issue still persists! Can anybody please help me with this?
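One angle worth checking, sketched below as an assumption rather than a confirmed fix: with TCP syslog the stream has to be re-assembled into discrete messages, and mid-message splits are often introduced by line breaking on the receiving/indexing side rather than by the firewall. A minimal props.conf sketch for the Splunk side, assuming a hypothetical sourcetype name fortigate_syslog and that each FortiGate message begins with a syslog priority such as <190>:

    # props.conf (sketch; sourcetype name is hypothetical)
    [fortigate_syslog]
    # Do not merge lines; break only where a new syslog frame starts.
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)(?=<\d+>)
    # FortiGate messages carry their own date=/time= fields.
    TIME_PREFIX = date=
    MAX_TIMESTAMP_LOOKAHEAD = 32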
Hi, is there a list of recommended indexes for Security Essentials? I have to build a PoC in a greenfield deployment and would like to create the indexes in such a way that they are also usable in Enterprise Security. Thanks, Alex
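A minimal sketch of what such an indexes.conf could look like, assuming a naming convention of one index per data domain so the data maps cleanly onto the CIM data models that Security Essentials and Enterprise Security both rely on; the index names below are hypothetical, not an official list.

    # indexes.conf (sketch; index names are hypothetical)
    [network_firewall]
    homePath   = $SPLUNK_DB/network_firewall/db
    coldPath   = $SPLUNK_DB/network_firewall/colddb
    thawedPath = $SPLUNK_DB/network_firewall/thaweddb

    [os_windows]
    homePath   = $SPLUNK_DB/os_windows/db
    coldPath   = $SPLUNK_DB/os_windows/colddb
    thawedPath = $SPLUNK_DB/os_windows/thaweddb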
Hey all - I successfully installed a forwarder on my RPi a while back but can't seem to be able to do it again. After unpacking splunkforwarder-9.0.3-dd0128b1f8cd-Linux-armv8.tgz into /opt/, I can't run the splunk binary. I have also tried the 64-bit and s390x packages, just because. I've googled extensively and had no luck with solutions such as creating a symbolic link to a certain file, but the file already exists on the system. I realize the file name says armv8 and armv8 "introduces the 64-bit instruction set", but the downloads page doesn't have armv7. Anyway...

sudo /opt/splunkforwarder/bin/splunk start --accept-license
/opt/splunkforwarder/bin/splunk: 1: Syntax error: "(" unexpected

uname -a
Linux zeek-pi 5.15.61-v7l+ #1579 SMP Fri Aug 26 11:13:03 BST 2022 armv7l GNU/Linux

uname -r
5.15.61-v7l+

Any suggestions on how to fix this? Thank you!
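A quick architecture check, sketched below with standard Linux tools (nothing Splunk-specific): the shell's "Syntax error" message is the usual symptom of the kernel refusing to execute a binary built for a different architecture and falling back to interpreting it as a script.

    # What was the binary built for?
    file /opt/splunkforwarder/bin/splunk    # e.g. "ELF 64-bit LSB executable, ARM aarch64, ..."

    # What does the OS actually run?
    uname -m                                # "armv7l" indicates a 32-bit ARM kernel/userland

    # An aarch64 (armv8, 64-bit) binary will not run on an armv7l (32-bit) system.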
Hi all, I am using Splunk version 9.0.1. Is there a way to schedule PDF delivery for a dashboard in Dashboard Studio? If yes, please tell me how to do it.
I have Splunk set up, and it establishes connections with syslog and the Splunk universal forwarder from a remote server. I have syslog-ng set up as follows (screenshot). You can see the connections established (screenshot). This is the inputs.conf for the Splunk universal forwarder (screenshot). But still no data is being received by Splunk (screenshot). I was able to use a PowerShell script to verify that the logs were being sent and delivered to the server running Splunk, so the issue is with Splunk itself. Am I missing something? And how would I go about troubleshooting and fixing the issue?
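Since the inputs.conf itself is only shown as a screenshot, here is a minimal sketch of what a working file-monitoring setup on the forwarder typically involves, with hypothetical path, index, and sourcetype names, plus two CLI checks that exist on the universal forwarder:

    # inputs.conf on the universal forwarder (paths/names are hypothetical)
    [monitor:///var/log/remote/*.log]
    index = syslog
    sourcetype = syslog
    disabled = false

    # On the forwarder:
    #   /opt/splunkforwarder/bin/splunk list monitor          (is the path actually being watched?)
    #   /opt/splunkforwarder/bin/splunk list forward-server   (is the connection to the indexer active?)

It is also worth confirming that the index named in inputs.conf actually exists on the receiving Splunk instance.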
Why is the data model hierarchical? A data model is a hierarchical structure with a root dataset and child datasets. It occurred to me today that if the composition of the upper root dataset is changed, the lower datasets are also affected. If this happens, doesn't the lower data depend on the upper data and become vulnerable to every change? I don't understand why the structure is like this. Inheriting the characteristics of the data itself creates a dependency, which then has to be changed every time. For example, when a type of equipment is replaced, the composition of the fields changes, and the data model (everything below the root) must also change each time. I don't know if this is efficient. Why are data models designed to be hierarchical?
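For context on where that inheritance comes from, a small illustration with a hypothetical model (not an official CIM example): a child dataset is essentially the parent's constraint plus one additional constraint, so the parent's search and field extractions are reused rather than repeated, and a change at the root deliberately ripples down.

    Root event dataset "Web"        constraint: index=web sourcetype=access_combined
      Child dataset "Web.Errors"    inherits the root constraint, adds: status>=500
      Child dataset "Web.Uploads"   inherits the root constraint, adds: method=POST

The hierarchy trades the coupling described above for having the shared filter and extractions defined in exactly one place.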
I use a slightly customized Splunkd systemd unit file. When I apply a core upgrade to my Splunk installation, I've found that my unit file gets renamed and replaced with what I can only assume is a default one from Splunk. Does anyone know if this default is kept in a template file somewhere? I'm trying to see if there's a way to change it so that it contains my customizations.
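One way to keep customizations out of the regenerated file entirely, sketched under the assumption that the unit is named Splunkd.service: put them in a systemd drop-in, which is a plain systemd mechanism (not Splunk-specific) and survives the main unit file being rewritten.

    # systemctl edit Splunkd.service
    # writes a drop-in at /etc/systemd/system/Splunkd.service.d/override.conf, for example:

    [Service]
    # the settings below are placeholders for whatever the customizations are
    LimitNOFILE=1024000
    Environment="EXAMPLE_VAR=value"

    # then reload: systemctl daemon-reload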
Hello! I'm trying to integrate Akamai with Splunk using this app: https://splunkbase.splunk.com/app/4310

When trying to configure it, I get the error below:

"Encountered the following error while trying to save: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"

Java was installed as required:

java -version
openjdk version "1.8.0_352"
OpenJDK Runtime Environment (build 1.8.0_352-b08)
OpenJDK 64-Bit Server VM (build 25.352-b08, mixed mode)

The app was installed on my Splunk Enterprise heavy forwarder, and the configuration was based on this document: https://techdocs.akamai.com/siem-integration/docs/siem-splunk-connector

I imported the certificate into the Java keystore:

keytool -importcert -keystore /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.352.b08-2.el7_9.x86_64/jre/lib/security/cacerts -storepass changeit -file certificate.crt

Reference - https://stackoverflow.com/questions/21076179/pkix-path-building-failed-and-unable-to-find-valid-certification-path-to-requ?page=2&tab=scoredesc#tab-top

Error in the log:

"01-13-2023 11:39:54.829 -0300 INFO SpecFiles - Found external scheme definition for stanza="TA-Akamai_SIEM://" from spec file="/opt/splunk/etc/apps/TA-Akamai_SIEM/README/inputs.conf.spec" with parameters="hostname, security_configuration_id_s_, client_token, client_secret, access_token, initial_epoch_time, final_epoch_time, limit, log_level, proxy_host, proxy_port"
01-13-2023 11:39:55.241 -0300 INFO ModularInputs - Introspection setup completed for scheme "TA-Akamai_SIEM".
01-13-2023 11:42:07.678 -0300 WARN ModularInputs - Argument validation for scheme=TA-Akamai_SIEM failed: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
01-13-2023 11:42:29.337 -0300 WARN ModularInputs - Argument validation for scheme=TA-Akamai_SIEM failed: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"

I don't use a proxy. Can anyone shed some light? I'm losing hope of getting this integration to work! Thanks!
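A couple of checks, sketched as assumptions since the exact Akamai endpoint isn't shown: the PKIX error means the JVM running the modular input doesn't trust the certificate chain presented by the Akamai API host, so it's worth confirming which chain is actually presented and which keystore the running JVM reads.

    # See the chain the endpoint presents (hostname is a placeholder):
    openssl s_client -connect <akamai-api-hostname>:443 -showcerts </dev/null

    # Confirm the import landed in the keystore of the JVM that splunkd actually invokes:
    keytool -list -keystore /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.352.b08-2.el7_9.x86_64/jre/lib/security/cacerts -storepass changeit | grep -i akamai

Importing only the leaf certificate is often not enough; the intermediate CA certificates shown by -showcerts may need to be imported as well.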
I want to use the dedup command and see which values it removes from a field. Is this possible?
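A sketch of one way to see what dedup would discard, assuming a hypothetical field name my_field: number the occurrences of each value in the same order the events arrive, then keep everything after the first occurrence, which is exactly the set of events dedup removes.

    index=my_index
    | streamstats count as occurrence by my_field
    | where occurrence > 1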
Hello, I have an existing high-volume index and have discovered a chunk of event logs within it that would be a great candidate to convert to metrics. Can you filter these events so they are sent to a metrics index, and then convert the events to metrics at index time, all using props/transforms?

I have this props.conf:

[my_highvol_sourcetype]
TRANSFORMS-routetoIndex = route_to_metrics_index

transforms.conf:

[route_to_metrics_index]
REGEX = cpuUtilization\=
DEST_KEY = _MetaData:Index
FORMAT = my_metrics_index

But now what sourcetype do I use to apply the log-to-metrics conversion settings? Should I filter this dataset to a new sourcetype within my high-volume index, so I can apply the log-to-metrics conversion to all events matching the new sourcetype, and then route them to the metrics index? Any thoughts would be helpful on whether something like this is possible to do with props/transforms.
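A sketch of one possible shape for this, with hypothetical stanza, sourcetype, and field names, and assuming the events carry key=value pairs such as cpuUtilization=42; worth verifying the details against the log-to-metrics documentation, since the settings in a CLONE_SOURCETYPE transform apply to the cloned copy while the original event passes through unchanged.

    # transforms.conf (sketch; names are hypothetical)
    [clone_cpu_to_metrics]
    REGEX = cpuUtilization\=
    CLONE_SOURCETYPE = my_highvol_metrics
    DEST_KEY = _MetaData:Index
    FORMAT = my_metrics_index

    [metric-schema:extract_cpu_metrics]
    METRIC-SCHEMA-MEASURES = cpuUtilization

    # props.conf
    [my_highvol_sourcetype]
    TRANSFORMS-clone = clone_cpu_to_metrics

    [my_highvol_metrics]
    METRIC-SCHEMA-TRANSFORMS = metric-schema:extract_cpu_metrics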
I have a significant number of dashboards that use dbxquery to pull data from a significant number of servers running many NoSQL databases (>20) with standardized collection names (>20). I have database connections defined for each server/database combination. I'm currently using a simple dbxquery in search to pull data from these collections:

|dbxquery connection=$server_name$_database_name query="SELECT * FROM collection_name"
|(numerous transformations)

This works fine. Unfortunately, there is a lot of field transformation, JSON processing, etc. that needs to happen after the query, and it's always the same standard 8-10 lines. I'd like to standardize these queries and embed them in a macro, like this:

`collection_name(server_name)`

The problem is that |dbxquery doesn't appear to like being the first command in a macro:

Error in 'dbxquery' command: This command must be the first command of a search. The search job has failed due to an error. You may be able view the job in the Job Inspector.

Any ideas how to implement this macro in a clean way?
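A sketch of the macro-definition side, with hypothetical names, under the assumption that the macro expands to the generating command plus the shared post-processing and is invoked as the very first thing in the search string (nothing in front of the backticks); whether that alone resolves the ordering error isn't certain.

    # macros.conf (sketch; names and post-processing are placeholders)
    [get_collection(1)]
    args = server_name
    definition = | dbxquery connection=$server_name$_database_name query="SELECT * FROM collection_name" | spath | rename "row.*" as *

Usage in the search bar, as the leading text of the search:

    `get_collection(prod_server_01)`

The trailing spath/rename commands stand in for the shared 8-10 lines of post-processing.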
Hello, I am new to Splunk. I need to get the top 5 products sold for each day, for the last 7 days. The products could be different each day, as shown in the example below.

Day (X-Axis) | Top 5 Products (Y-Axis)
1            | P1, PA, P4, AC, ZX
2            | P2, PB, P5, AR, P1
3            | P3, PC, PA, P5, AC
4            | P4, P1, P1, P4, AR
5            | P5, PD, AB, AX, AB

Is there a way to get this done? I tried the following, but it gives me the same 5 products for all days and puts everything else in the "OTHER" bucket:

[my search] | table _time, Product | timechart count(Product) by Product WHERE max in top5
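A sketch of one way to get a per-day top 5, with hypothetical index and field names: count sales per product per day, sort within each day, rank with streamstats, and keep the top five ranks.

    index=sales
    | bin _time span=1d
    | stats count as sold by _time Product
    | sort 0 _time -sold
    | streamstats count as rank by _time
    | where rank <= 5
    | eval day=strftime(_time, "%Y-%m-%d")
    | table day rank Product sold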
Splunk 9.0.0 on Windows servers. I clicked on Apps > Enterprise Security and was greeted with this error:

App configuration
The "Enterprise Security" app has not been fully configured yet. This app has configuration properties that can be customized for this Splunk instance. Depending on the app, these properties may or may not be required.
Unknown search command 'essinstall'.
OK
Hi all, I am trying to list all tokens via the Splunk http-event-collector CLI and it returned the error below:

[centos8-1 mycerts]$ ~/splunk/bin/splunk http-event-collector list -uri https://centos8-1:8089
ERROR: certificate validation: self signed certificate in certificate chain
Cannot connect Splunk server

I used openssl to try to connect to my server and it returned code 0. However, if I use the openssl bundled with Splunk, it returns code 19. splunkd.log says:

01-14-2023 01:25:22.088 +0800 WARN HttpListener [75758 HttpDedicatedIoThread-6] - Socket error from 192.168.30.128:59764 while idling: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.

Once I commented out cliVerifyServerName in server.conf, the CLI works, but with the warning below:

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.

May I know if I missed any configuration here? The certificate is one I generated myself, and it is indeed self-signed.
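A sketch of the [sslConfig] settings that usually matter here, with hypothetical paths, under the assumption that the server certificate was signed by a private CA: the CLI needs to trust that CA, the certificate's Common Name (or SAN) has to match the hostname given in -uri, and, per the warning in splunkd.log, the CA and server certificates should not share a Common Name.

    # server.conf (sketch; paths are hypothetical)
    [sslConfig]
    serverCert = /opt/splunk/etc/auth/mycerts/server.pem
    sslRootCAPath = /opt/splunk/etc/auth/mycerts/myCA.pem
    cliVerifyServerName = true

    # Verify the chain with the same CA the CLI will use:
    #   openssl verify -CAfile /opt/splunk/etc/auth/mycerts/myCA.pem /opt/splunk/etc/auth/mycerts/server.pem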
I have events like the ones below:

-a3bcd: Info1234x:NullValue
-a3bcd: Info1234x:NullValue
-b3bcd: Info1234x:NullValue2
-c3bcd: Info1234x:NullValue3

I managed to produce a table like this:

ErrorInfo                    | Count
a3bcd: Info1234x:NullValue   | 2
-b3bcd: Info1234x:NullValue2 | 1
-c3bcd: Info1234x:NullValue3 | 1

I would like to condense those events into one row, since they are all the same kind of error with different parameters, so it would look like:

ErrorInfo   | Count
Info1234x:  | 1

Thanks in advance
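A sketch of one way to normalize away the variable prefix and suffix before counting, under the assumptions that the common token always looks like Info<digits>x: and that the extracted field is called ErrorInfo as in the table above.

    ... existing search ...
    | rex field=ErrorInfo "(?<error_type>Info\d+x:)"
    | stats count by error_type

Here stats count gives the total number of matching events per error type; dc() could be used instead if the goal is the number of distinct parameter values.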
Hello, if I want to send a job to the background from a dashboard, I have to use Open in Search and only then can I make that choice. I wonder if there is some trick to send a dashboard job to the background directly. Do you have any hints? Thank you and happy splunking, Marco
Hi, I'm trying to onboard NSG Flow Logs, and while I have managed to break the events into the specific tuples as per this link [https://answers.splunk.com/answers/714696/process-json-azure-nsg-flow-log-tuples.html?_ga=2.123284427.1721356178.1673537284-343068763.1657544022], I lose a lot of useful information that I need, such as "rule". Does anyone have any ideas? Sample data:

{
  "records": [
    {
      "time": "2017-02-16T22:00:32.8950000Z",
      "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434",
      "category": "NetworkSecurityGroupFlowEvent",
      "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG",
      "operationName": "NetworkSecurityGroupFlowEvents",
      "properties": {
        "Version": 1,
        "flows": [
          { "rule": "DefaultRule_DenyAllInBound",
            "flows": [ { "mac": "000D3AF8801A",
                         "flowTuples": [ "1487282421,42.119.146.95,10.1.0.4,51529,5358,T,I,D" ] } ] },
          { "rule": "UserRule_default-allow-rdp",
            "flows": [ { "mac": "000D3AF8801A",
                         "flowTuples": [ "1487282370,163.28.66.17,10.1.0.4,61771,3389,T,I,A",
                                         "1487282393,5.39.218.34,10.1.0.4,58596,3389,T,I,A",
                                         "1487282393,91.224.160.154,10.1.0.4,61540,3389,T,I,A",
                                         "1487282423,13.76.89.229,10.1.0.4,53163,3389,T,I,A" ] } ] }
        ]
      }
    },
    {
      "time": "2017-02-16T22:01:32.8960000Z",
      "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434",
      "category": "NetworkSecurityGroupFlowEvent",
      "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG",
      "operationName": "NetworkSecurityGroupFlowEvents",
      "properties": {
        "Version": 1,
        "flows": [
          { "rule": "DefaultRule_DenyAllInBound",
            "flows": [ { "mac": "000D3AF8801A",
                         "flowTuples": [ "1487282481,195.78.210.194,10.1.0.4,53,1732,U,I,D" ] } ] },
          { "rule": "UserRule_default-allow-rdp",
            "flows": [ { "mac": "000D3AF8801A",
                         "flowTuples": [ "1487282435,61.129.251.68,10.1.0.4,57776,3389,T,I,A",
                                         "1487282454,84.25.174.170,10.1.0.4,59085,3389,T,I,A",
                                         "1487282477,77.68.9.50,10.1.0.4,65078,3389,T,I,A" ] } ] }
        ]
      }
    },
    {
      "time": "2017-02-16T22:02:32.9040000Z",
      "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434",
      "category": "NetworkSecurityGroupFlowEvent",
      "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG",
      "operationName": "NetworkSecurityGroupFlowEvents",
      "properties": {
        "Version": 1,
        "flows": [
          { "rule": "DefaultRule_DenyAllInBound",
            "flows": [ { "mac": "000D3AF8801A",
                         "flowTuples": [ "1487282492,175.182.69.29,10.1.0.4,28918,5358,T,I,D",
                                         "1487282505,71.6.216.55,10.1.0.4,8080,8080,T,I,D" ] } ] },
          { "rule": "UserRule_default-allow-rdp",
            "flows": [ { "mac": "000D3AF8801A",
                         "flowTuples": [ "1487282512,91.224.160.154,10.1.0.4,59046,3389,T,I,A" ] } ] }
        ]
      }
    }
  ]
}
I have the following stats search:

index=servers1 OR index=servers2 DBNAME=DATABASENAME source="/my/log/source/*"
| stats list(Tablespace), list("Total MB"), list("Used MB"), list("Free MB"), list("Percent Free") by DBNAME

Which is supposed to produce the following results:

DBNAME       | list(Tablespace) | list(Total MB) | list(Used MB) | list(Free MB) | list(Percent Free)
DATABASENAME | RED_DATA         | 165890         | 46350         | 119540        | 72
             | BLUE_DATA        | 2116           | 1016          | 1100          | 52
             | PINK_DATA        | 10             | 0             | 10            | 100
             | PURPLE_DATA      | 34             | 17            | 17            | 50
             | GREEN_DATA       | 6717           | 0             | 6717          | 100
             | ORANGE_DATA      | 51323          | 295           | 51028         | 99
             | .... cont'd for 20+ rows

About 25-30% of the time (as both a live search and a scheduled report) I get the following results instead. The list(Used MB) column will have extra garbage data which appears to be job search components:

DBNAME       | list(Tablespace) | list(Total MB) | list(Used MB)                            | list(Free MB) | list(Percent Free)
DATABASENAME | RED_DATA         | 165890         | 46350                                    | 119540        | 72
             | BLUE_DATA        | 2116           | 1016                                     | 1100          | 52
             | PINK_DATA        | 10             | 0                                        | 10            | 100
             | PURPLE_DATA      | 34             | 17                                       | 17            | 50
             | GREEN_DATA       | 6717           | 0                                        | 6717          | 100
             | ORANGE_DATA      | 51323          | 295                                      | 51028         | 99
             |                  |                | ion.command.search                       |               |
             |                  |                | 4                                        |               |
             |                  |                | duration.command.search.calcfields      |               |
             |                  |                | 0                                        |               |
             |                  |                | duration.command.search.fieldalias      |               |
             |                  |                | 0                                        |               |
             |                  |                | duration.command.search.filter          |               |
             |                  |                | 0                                        |               |
             |                  |                | duration.command.search.index           |               |
             |                  |                | 2                                        |               |
             |                  |                | duration.command.search.index.usec_1_8  |               |
             |                  |                | 0                                        |               |
             |                  |                | duration.command.search.index.usec_8_64 |               |
             |                  |                | 0                                        |               |
             |                  |                | duration.command.search.kv              |               |
             |                  |                | 0                                        |               |
             |                  |                | duration.command.search.lookups         |               |
             |                  |                | .... cont'd for 20+ rows                 |               |

I've adjusted the following limits.conf search head settings with no luck:

[stats]
maxresultrows = 50000
maxvalues = 10000
maxvaluesize = 10000
list_maxsize = 10000

The search inspector also does not produce any notable error messages. Any ideas as to what is happening here and how I can solve it?
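One hedged containment step, since the extra values look like search-introspection field names and durations coming from events that match the index/source filters but are not tablespace report lines: constrain the base search to events that actually carry the expected fields before they reach stats. The field names are taken from the search above.

    index=servers1 OR index=servers2 DBNAME=DATABASENAME source="/my/log/source/*" Tablespace=*
    | where isnotnull('Used MB')
    | stats list(Tablespace), list("Total MB"), list("Used MB"), list("Free MB"), list("Percent Free") by DBNAME

This does not explain where the stray values come from, but it should keep them out of the list() output while that is investigated.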
Hi, while pushing a custom-created application from the master to the search heads, I am getting the error below.

Output:
[root@ip-172-31-23-159 bin]# ./splunk apply shcluster-bundle -target https:172.31.22.82:8089
Warning: Depending on the configuration changes being pushed, this command might initiate a rolling restart of the cluster members. Please refer to the documentation for the details.
Do you wish to continue? [y/n]: y
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Error in pre-deploy check, uri=?/services/shcluster/captain/kvstore-upgrade/status, status=502, error=Cannot resolve hostname
[root@ip-172-31-23-159 bin]#
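For comparison, a sketch of the documented shape of the command; the hostname and credentials below are placeholders. Two things worth checking against the output above: the command is normally run from the deployer, and the -target value takes the form https://<member>:<mgmt_port> with the // after the scheme.

    ./splunk apply shcluster-bundle -target https://172.31.22.82:8089 -auth admin:changeme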
I am trying to drill down within a dashboard. I wish to set a token value with relative_time, using a dynamic relative-time-specifier input variable. If I set the relative time specifier to "+1h" it works fine:

<eval token="endTime_token">relative_time($startTime_token$, "+1h")</eval>

But when I use a token with value "1h" it does not:

<eval token="endTime_token2">relative_time($startTime_token$, "+$resultion_token$"</eval>

I paste my complete code for reference:

<form>
  <label>Drilldown-lab</label>
  <fieldset submitButton="false">
    <input type="time" token="period_token">
      <label>Period</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>@h</latest>
      </default>
    </input>
    <input type="dropdown" token="resolution_token">
      <label>Resolution</label>
      <choice value="15m">15 minutes</choice>
      <choice value="1h">1 hour</choice>
      <default>1h</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Overview-panel</title>
      <table>
        <search>
          <query>
            index="my_index"
            | bin _time span=$resolution_token$
            | eval startTime = strftime(_time, "%Y-%m-%d %H:%M")
            | stats count by startTime
          </query>
          <earliest>$period_token.earliest$</earliest>
          <latest>$period_token.latest$</latest>
        </search>
        <option name="drilldown">row</option>
        <drilldown>
          <eval token="startTime_token">strptime($row.startTime$, "%Y-%m-%d %H:%M")</eval>
          <eval token="endTime_token">relative_time($startTime_token$, "+1h")</eval>
          <eval token="endTime_token2">relative_time($startTime_token$, "+$resultion_token$"</eval>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <panel depends="$startTime_token$">
      <title>Drilldown-panel $endTime_token$, $endTime_token2$</title>
      <table>
        <search>
          <query>
            index="my_index"
            | stats avg(responseTimeMs) as Responsetime_avg count by assetId
          </query>
          <earliest>$startTime_token$</earliest>
          <latest>$endTime_token$</latest>
        </search>
      </table>
    </panel>
  </row>
</form>

I am able to calculate the endTime value in my query (SPL), but I would prefer to be able to set it with "eval token".
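A hedged sketch of how that second eval might be written, under the assumptions that the token name should match the dropdown input (resolution_token rather than resultion_token) and that the relative_time call needs its closing parenthesis; whether that accounts for the whole behaviour above isn't certain.

    <eval token="endTime_token2">relative_time($startTime_token$, "+$resolution_token$")</eval>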