All Topics

Hello,

I have a problem with Splunk ES Glass Tables not loading when setting requireClientCert=true in sslConfig. The complete SSL setup otherwise works fine with sslVersions=tls1.2, using certificates signed by our own CA. When trying to access the Glass Tables from the ES menu, I get the following error message:

HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /servicesNS/nobody/SplunkEnterpriseSecuritySuite/storage/collections/config/SplunkEnterpriseSecuritySuite_glasstables (Caused by SSLError(SSLError(1, u'[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:742)'),))

P.S. I have tried adding ssl3 to the allowed list in sslVersions just to check whether that was the problem, but I end up with a KVStore failure. In any case, that is not how I want to solve it.

Thank you in advance for your responses.

Regards
I have followed all of Splunk's documentation on using certificates signed by a local Certificate Authority and have tried to set up the SSL configuration in server.conf, inputs.conf, and outputs.conf, but no matter what, the connection between Indexer and Forwarders cannot be established because the Indexer actively refuses the connection. For the SSL configuration on the forwarders I have a custom app that is pushed using the Deployment Server.

The server.conf in the indexer's .../system/local:

[sslConfig]
sslRootCAPath = /path/to/RootCA.pem
sslVersions = tls1.2

The inputs.conf in the indexer's .../system/local:

[splunktcp-ssl://9997]
connection_host = ip
disabled = 0

[SSL]
serverCert = /path/to/serverCert.pem
requireClientCert = true
sslVersions = tls1.2

The server.conf in the forwarder's .../app//local:

[sslConfig]
sslRootCAPath = /path/to/app/RootCA.pem
sslVersions = tls1.2

The outputs.conf in the forwarder's .../app//local:

[tcpout]
useSSL = true
clientCert = /path/to/app/clientCert.pem
useACK = true
sslVersions = tls1.2

Essentially, I have three .pem files: RootCA.pem (in the server.conf of both Indexer and Forwarder), serverCert.pem (in the inputs.conf of the Indexer), and clientCert.pem (in the outputs.conf of the Forwarder). I want to make sure that communication between Deployment Server and Forwarder does not require certificates, as I am trying to install the Forwarders with a script and have it pull the certificates and configuration so it can then communicate with the receiving port (9997).

What am I doing wrong? I followed these instructions:

https://docs.splunk.com/Documentation/Splunk/7.3.4/Security/HowtoprepareyoursignedcertificatesforSplunk
https://docs.splunk.com/Documentation/Splunk/7.3.4/Security/ConfigureSplunkforwardingtousesignedcertificates

And I am running Splunk 7.3.4.
Hi Dear Splunkers,

I am trying to develop a Modular Input for our REST API that will ingest data through a Python script. The idea is simple: the modular input will poll our REST API at some interval, fetch the data, and index it into Splunk. However, I am confused about the concept of single-instance versus multiple-instance modular inputs. What I have understood is that a single-instance modular input can be configured only once by the user, and there is only one instance of the Python script running at any point. Our API always returns the same type of data, so there is no need for the user to configure multiple inputs; otherwise the same data would be duplicated and indexed by Splunk, which would be wasteful, I believe.

Can someone explain the major difference between the two types in easy terms, and also suggest which type of modular input I should create for my use case?

Thank you all for taking the time to read this. Regards!
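As background for the question above: a modular input advertises single- vs multiple-instance behavior in the XML scheme it prints when splunkd invokes it with --scheme. With use_single_instance set to true, splunkd launches one script process and hands it every configured stanza; with false, it launches one process per stanza. A minimal stdlib-only sketch of that scheme response (the title, description, and input name here are illustrative assumptions, not from the original post):

```python
# Minimal sketch of a modular input's --scheme response (stdlib only).
# use_single_instance=true -> splunkd starts ONE script process that
# receives all configured stanzas, so the API is polled only once.
import sys
import xml.etree.ElementTree as ET

def build_scheme():
    scheme = ET.Element("scheme")
    ET.SubElement(scheme, "title").text = "My REST API Input"
    ET.SubElement(scheme, "description").text = "Polls our REST API on an interval"
    # One process handles every stanza -> no duplicate polling/indexing
    ET.SubElement(scheme, "use_single_instance").text = "true"
    return ET.tostring(scheme, encoding="unicode")

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--scheme":
        print(build_scheme())
```

Given that the API always returns the same data, single-instance looks like the natural fit for this use case, since it avoids two stanzas polling and indexing the same records twice.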
We have a requirement to send Splunk processed data as a CSV to a third-party system. Currently the CSV file is sent via email, but we want it kept in a shared location (or folder) from which Control-M (or a similar batch processing system) can move the file for other purposes.

My understanding of sending or exporting to a third party from Splunk is the following:
(a) Send as syslog to an external system (but that will be events)
(b) Send as alerts (again, events)
(c) Send a CSV file, but as an email attachment

Is there a way to do option (c) as an export/dump into an external filesystem instead of email? Has anyone tried this, or written a custom output/export script?
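One common route for this (besides `outputcsv`, which writes under $SPLUNK_HOME/var/run/splunk/csv on the search head) is a custom alert action script that writes the search results to a mounted share. A rough sketch of the writing step, assuming the results arrive as a list of dicts; the share path, filename, and field names are made-up placeholders, not any Splunk API:

```python
# Sketch of a custom alert-action-style export: write search results to
# a shared folder for Control-M to pick up. Paths/names are illustrative.
import csv
import os

def write_results(rows, out_dir, filename="splunk_export.csv"):
    """rows: list of dicts (one dict per search result).
    Writes a CSV with a sorted union of all keys as the header.
    Returns the path of the file written."""
    path = os.path.join(out_dir, filename)
    fieldnames = sorted({key for row in rows for key in row})
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
    return path
```

In a real alert action the script would read the gzipped results file Splunk passes to it and then write to the share; the snippet only shows the output side of that flow.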
I am looking to find events where the IP address changes from the previous value to the current one; however, using first(ip) and last(ip) misses the events in between the first and last. Ideally I want to detect when the IP value changes and then look at the previous IP value. This comparison is then used to find the IP geolocation and calculate speed = distance/time with the haversine app. Thank you.
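In SPL this is typically done with something like `| streamstats current=f last(ip) as prev_ip | where ip != prev_ip`, which carries the previous event's IP forward so every change is caught, not just first/last. A pure-Python sketch of that logic plus the haversine speed calculation (the event fields and coordinates below are made-up illustration data):

```python
# Sketch: compare each event's IP with the previous one (the same idea
# as `streamstats current=f last(ip) as prev_ip`), then estimate travel
# speed between the two geolocations with the haversine formula.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius ~6371 km

def ip_changes(events):
    """events: time-ordered dicts with _time (epoch), ip, lat, lon.
    Yields (prev_event, event, speed_km_per_h) at each IP change."""
    prev = None
    for ev in events:
        if prev is not None and ev["ip"] != prev["ip"]:
            dist = haversine_km(prev["lat"], prev["lon"], ev["lat"], ev["lon"])
            hours = (ev["_time"] - prev["_time"]) / 3600.0
            yield prev, ev, dist / hours if hours else float("inf")
        prev = ev
```

An implausibly high speed between consecutive distinct IPs is then the "impossible travel" signal the haversine comparison is after.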
Hi All,

Recently I have noticed that some of our Saved Searches are failing with errors like the one below:

"Failed to start search for id="scheduler__abcde__Qk1TX1dNX0lOVEdfTUVUUklDUw__RMD57438a1f3bbe5dac6_at_1588593600_88844". Dropping failedtostart token at path=/opt/splunk/var/run/splunk/dispatch/scheduler__abcde_Qk1TX1dNX0lOVEdfTUVUUklDUw__RMD57438a1f3bbe5dac6_at_1588593600_88844 to expedite dispatch cleanup"

Could anyone suggest what the issue could be?
Hello, I was wondering whether it is possible to see which apps are being used, and the amount of RAM and CPU per user. What kinds of logs and events do I need to be sending in? To be clear, this is to monitor a server that has a lot of users accessing and using it.
We're looking to license the Qualys API so we can use this TA. We only run a monthly Qualys scan, so I'm unsure whether we need the Standard API's 300 calls per hour. How likely is this to work within the Basic API's limit of 50 calls per day?
I am having an issue with a dashboard not populating all hosts when I select ALL in the host dropdown. For reference, I have a token $env$ which populates inside the host token before $host$ is set. The issue is that when I select ALL in the dropdown to display all hosts in the table below, it shows no results. Here is where I believe it is failing:

<eval token="host">if($host$ == "All", "xxx$env$-$service$*", $host$)</eval>

When I select ALL from the Hosts dropdown, I want the graphs to display all hosts matching the previously selected env and service dropdowns. Is this possible?
On a Raspberry Pi 3 (armv7l GNU/Linux), setting INDEXED_EXTRACTIONS=JSON in props.conf results in unrecoverable JSON StreamId processing errors:

05-06-2020 17:52:07.836 +0100 ERROR JsonLineBreaker - JSON StreamId:8017092045127549753 had parsing error:Unexpected character: '5' - data_source="/opt/splunkforwarder/var/log/splunk/metrics.log", data_host="rpi3", data_sourcetype="json"

(the same line repeats continuously)

With the log expanding so quickly, /opt/splunkforwarder/var/log/splunk/splunkd.log fills up to maximum logrotate capacity.

Steps to duplicate the bug:

1. Install splunkforwarder-8.0.3-a6754d8441bf-Linux-arm.tgz onto a Raspberry Pi 3.
2. Edit /opt/splunkforwarder/etc/system/local/props.conf and add the following:

[default]
SHOULD_LINEMERGE = false
KV_MODE = none
INDEXED_EXTRACTIONS = JSON
NO_BINARY_CHECK = true
TRUNCATE = 0

3. Add a local JSON file to the Splunk file monitor with:
$SPLUNK_HOME/bin/splunk add monitor /var/log/myvalidjsonfile.json -sourcetype json -host myhost -index myindex
4. Restart Splunk.
5. Check the file with tail -f $SPLUNK_HOME/var/log/splunk/splunkd.log and watch it scroll away off the screen! The errors above are reported for both metrics.log and splunkd.log itself(!)
6. Stop Splunk, edit props.conf again and remove the line INDEXED_EXTRACTIONS=JSON, then restart Splunk. Your splunkd.log is back to normal.
Every time I try to install the Universal Forwarder on a Windows 10 64-bit machine, the installation ends prematurely, almost immediately. When I check the event logs I see Event IDs 1033 (with status code 1603) and 11708 ("Product: UniversalForwarder -- Installation failed.").
Query:

index=java networkenv=prod stackenv=prod source="/opt/jboss/standalone/custom_engine.log"
| convert ctime(_time) as time timeformat="%m/%d/%Y %H:%M:%S"
| rex field=_raw ""orderEnteredBy":\"(?[^\"]+\")"
| table time orderEnteredBy

It matches the word orderEnteredBy, but when I try to get the name into the table it shows up empty.

Here is the log:

availableBalance":{},"projectedBalance":{}}},"productSummary":{"symbol":"RPACX","displayValue":"RPACX","cusip":"32254T759","assetType":"MUTUAL_FUND","description":"UFACRESCENT FUND N/L","settlementDuration":4,"omnibusProduct":true},"orderEnteredBy":"JANEDOE","orderUpdatedBy":"SYS_USER","dirtyFSTT":false,"messagesSummary":{"messagesCount":2,"errorsCount":0,"warningsCount":0,"infosCount":0},"assetType":"MUTUAL_FUND","fundServTransactionTypeDetails":{"cdscValueOverridable":false,"netAssetValue":"NAV_OTHER","rightsOfAccumulation":{"additionalHoldings":0,"manualOverride":false,"additionalHoldingsValid":true}
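For context on why the table is empty: the capture group in the posted rex has no name, so nothing is extracted into a field, and the closing quote sits inside the character class's match. A Python check against the sample event with a named group that stops before the closing quote (the equivalent rex would be something like `| rex field=_raw "\"orderEnteredBy\":\"(?<orderEnteredBy>[^\"]+)\""`):

```python
# The posted rex's group has no name, so no field is created.
# A named group that captures up to (not including) the closing quote
# pulls out the value from the sample event.
import re

raw = '..."omnibusProduct":true},"orderEnteredBy":"JANEDOE","orderUpdatedBy":"SYS_USER"...'
m = re.search(r'"orderEnteredBy":"(?P<orderEnteredBy>[^"]+)"', raw)
print(m.group("orderEnteredBy"))  # JANEDOE
```

Python uses `(?P<name>...)` where SPL's rex uses `(?<name>...)`; the pattern is otherwise the same.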
Hello,

My objective is to clean three distinct substrings from a comma-delimited string. All of the substrings may be present in the string, only some may be present, or none at all; their positions within the string can vary as well. Assuming the values substring1, substring2, and substring3, here are some examples:

this,is,substring1,a,sentence,with,one
substring2,this,has,substring1,all,three,substring3
here,there,are,no,substrings
this,only,substring3,substring1,has,two

Ideally I would like to incorporate the logic within a data model, which limits me to eval or rex (replace isn't possible). So far I can do it with rex mode=sed, but I can't add that to a data model. Here is a run-anywhere example with my sed solution:

| makeresults
| eval string="this,is,substring1,a,sentence,with,one"
| append [ | makeresults | eval string="substring2,this,has,substring1,all,three,substring3" ]
| append [ | makeresults | eval string="here,there,are,no,substrings" ]
| append [ | makeresults | eval string="this,only,substring3,substring1,has,two" ]
| table string
| rex mode=sed field=string "s/$/,/g"
| rex mode=sed field=string "s/substring1,//g"
| rex mode=sed field=string "s/substring2,//g"
| rex mode=sed field=string "s/substring3,//g"
| rex mode=sed field=string "s/.$//g"

The first and last sed commands add a comma to the end of the string (to handle the case where a substring is positioned at the end) and remove it again to clean up afterwards.

Are there any better solutions?

Thanks in advance, and best regards,
Andrew
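One eval-only alternative worth considering (so no sed, and therefore data-model-friendly) is to split on commas, filter out the unwanted tokens, and rejoin, e.g. `| eval toks=split(string,",") | eval toks=mvfilter(toks!="substring1" AND toks!="substring2" AND toks!="substring3") | eval string=mvjoin(toks,",")` — split, mvfilter, and mvjoin are all eval functions. A Python sketch of that split/filter/join logic, checked against the examples above:

```python
# Split on commas, drop the noise tokens wherever they appear, rejoin.
# This also handles tokens at the start or end of the string, so no
# trailing-comma workaround is needed.
def clean(string, noise=("substring1", "substring2", "substring3")):
    return ",".join(tok for tok in string.split(",") if tok not in noise)
```

Because the filter works on whole tokens rather than on the raw text, the position of each substring in the string no longer matters.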
Hi,

I wonder if anyone can help. Running a search in Splunk Search & Reporting, I see all the fields as required, using the sourcetype, index, source, etc. Running the same search within ES (on the same search head), I don't get all the same fields; examples being src_user and src_user_email.

The following are true:
- The Splunk TA is on both search heads
- Permissions on the TA are Global and read is available to all
- I am using the same searches in Verbose mode
- I have checked that there are no field aliases etc. in the UI
- This is a Splunk Cloud managed instance

Any help would be very much appreciated.
Hi,

I have a requirement where I have a page, say https://www.abc.com/mobile, and this page loads various assets (CSS, JS, images, etc.). My access logs contain everything, including the size of the pages and assets. Say my referer is "https://www.abc.com/mobile/monthly" and this page loads 10 assets (JS, CSS, images, etc.). How do I sum the size of those assets plus the size of the page itself and put it in a tabular format with two columns, Page and total size?

I was doing something like the following, but it's not what I want:

index=temp sourcetype=access_combined_wcookie referer="https://www.abc.com/mobile/monthly" OR requested_content="/mobile/monthly"
| stats values(size) as size count by requested_content

Let me know if someone can help. It will be appreciated.
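The underlying idea is to attribute each asset request's size to the page named in its referer, attribute the page request's own size to itself, and then sum per page (in SPL terms, derive a common "page" field with eval and then `stats sum(size) by page` instead of `values(size) by requested_content`). A Python sketch of that grouping, with made-up field names mirroring access_combined_wcookie:

```python
# Sketch: sum a page's own size plus the sizes of the assets it loaded.
# An event counts toward the page if it IS the page (uri matches) or if
# it was loaded BY the page (referer ends with the page path).
from collections import defaultdict

def size_per_page(events, page="/mobile/monthly"):
    totals = defaultdict(int)
    for ev in events:
        if ev.get("uri") == page or ev.get("referer", "").endswith(page):
            totals[page] += int(ev["size"])
    return dict(totals)
```

With several pages of interest, the same loop would derive the page key from each event (uri or trimmed referer) instead of taking it as a parameter, which is what the eval-then-stats version does in SPL.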
I'm looking for how to change the WildFire configuration in the Splunk app to pull reports from the GovCloud API. Our internal Splunk engineers weren't able to find any way to change the URL to wildfire*.gov.*paloaltonetworks.com. Is this something that's possible? Or could this functionality be added in the near future?
I have created a custom Python script named "sn_sec_util.py" in the bin folder of my Splunk app. I want to load this file in the Python REST handler "my_submit.py", which is also in the bin folder, but the import statement fails with "ModuleNotFoundError: No module named 'sn_sec_util'".

Here is "my_submit.py", with the import of the module "sn_sec_util.py" on line 4:

import sys
import splunk
import logging, logging.handlers
import sn_sec_util as snutil
from urllib.parse import unquote

class Submit(splunk.rest.BaseRestHandler):
    def handle_submit(self):
        payload = self.request['payload']
        sessionKey = self.sessionKey

    handle_POST = handle_submit
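A likely cause, for context: Splunk's embedded Python does not necessarily put an app's bin/ directory on sys.path when it runs a REST handler, so a sibling module in the same folder is not importable by name alone. A common workaround is to prepend the handler's own directory to sys.path before the import. A sketch of that pattern (the helper function is illustrative, not a Splunk API):

```python
# Sketch: make a sibling module importable by putting its directory on
# sys.path first. In my_submit.py this would typically be
#   sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
# placed before the line `import sn_sec_util as snutil`.
import os
import sys

def import_sibling(name, directory):
    """Prepend `directory` to sys.path, then import module `name` from it."""
    if directory not in sys.path:
        sys.path.insert(0, directory)
    return __import__(name)
```

Inserting at position 0 (rather than appending) makes the app's own copy win over any same-named module elsewhere on the path.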
Hello,

I've been searching for a while and haven't found an answer to my problem. I'd like to understand why my PDF dashboard exports show "$field1$" instead of "PE0101" (a server name) in the graph titles.

I found the following here on docs.splunk.com/Documentation/:
=> Splunk/6.5.1/Viz/DashboardPDFs#Limitations_to_PDF_generation
=> Splunk/8.0.3/Viz/DashboardPDFs#Limitations_to_PDF_generation
=> answers/92805/how-to-change-advance-xml-to-simple-xml.html

It seems to be a question of XML format, simple or advanced: variables are not translated into the PDF export with advanced XML dashboards. That explanation would almost satisfy me, but not quite.

Please see in the first screenshot that everything is OK in the dashboard called "Backbone PING - Etats des Interfaces", both in the UI and in the PDF export. Notice in the second screenshot that in the dashboard called "Backbone PING - Graphes Equipement" the UI is OK but the PDF export is NOT OK: the variables are not set.

I use this kind of custom XML code in both dashboards:
<form script="custom.js">
<set token="timelabel">$label$</set>

I believe both dashboards are in advanced XML mode, so why does the first export its variables to PDF and the second not?

Thanks for your answers.
I have a transaction with mvlist set to true, which results in a table where a number of fields display multiple NULL values (this is all one row):

Col1    Col2    Col3
12345   NULL    1111
NULL    XYZ     2222
NULL    NULL    3333

I would like it filtered to the following:

Col1    Col2    Col3
12345   XYZ     1111
                2222
                3333

In the Splunk docs I read that mvfilter in combination with the isnotnull or !isnull functions can be used when you want to return only values that are not NULL from a multivalue field. Neither of these appears to work for me:

y=mvfilter(isnotnull(x))
y=mvfilter(!isnull(x))

While this does:

y=mvfilter(x!="NULL")

Any ideas on why the former doesn't work? Are there any performance differences between the methods?
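The likely explanation, for context: if the multivalue field contains the literal string "NULL" rather than actual null values, isnotnull() treats each "NULL" as a present, non-null value, so mvfilter(isnotnull(x)) keeps everything; only the string comparison x!="NULL" removes them. A tiny Python analogue of the distinction (the sample values echo the table above):

```python
# "NULL" here is a four-character string, not a real null -- just like
# the values in the mvlist table. A null-check keeps it; only a string
# comparison drops it.
values = ["12345", "NULL", "XYZ", "NULL"]

kept_by_null_check = [v for v in values if v is not None]  # keeps "NULL"
kept_by_string_cmp = [v for v in values if v != "NULL"]    # drops "NULL"
```

So the string-comparison form is not a workaround but the correct test for data whose "nulls" are really the text "NULL"; any performance difference between the eval functions would be negligible next to that correctness point.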
All of my Splunk alerts and reports are getting quarantined by Microsoft's spam filter, with the stated reason "Quarantine reason: Phish". The alerts are simply sent from "splunk" with no email address associated with them, so I can't even add the sender to "Safe Senders/Recipients".