All Topics


Hi all, I have to configure Enterprise Security, and one of the data sources is FireEye. On Splunkbase I found a CIM 4.x-compliant add-on that seems to be the correct one, but it has been archived! Does anyone know why it was archived, and whether I can still use it (or whether it's better to choose another one) on Splunk 8.1.1 and ES 4.6.1? Thanks. Ciao. Giuseppe
I would like to continue my Splunk training, but my Splunk trial is over and I am trying to buy Splunk Light or Enterprise. I have called the sales department over 10 times just to buy the service, but no one seems to want to sell it to me so I can complete my training. Someone please help. Also, why can't we just buy it from the website? I want to change my current career, but for some reason no one wants to help me buy it so I can use it.
We have Splunk DB Connect 3.4.1 working with our SQL Server databases. We also have an Oracle database that works over port 1521 (non-SSL), but we are trying to get it to work over SSL. We've set up everything according to the instructions, but we're getting this error message: javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is disabled or cipher suites are inappropriate). So it looks like there's a mismatch in cipher suites between DB Connect and the Oracle database. How can I find out which cipher suites DB Connect is trying to use? We're using ojdbc8 for Oracle 19.
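A hedged sketch of one way to see exactly which protocols and cipher suites the DB Connect JVM offers: enable JSSE handshake debugging in the task server's JVM options (set in the DB Connect Settings UI) and watch the handshake in the task server logs. The TCPS URL below is only an illustration; host, port, and service name are placeholders.

# Add to the DB Connect Task Server JVM options (then restart the task server):
-Djavax.net.debug=ssl:handshake:verbose

# Example Oracle TCPS JDBC URL (host/port/service are hypothetical):
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=dbhost.example.com)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=ORCL)))

The debug output lists every cipher suite the client proposes, which you can compare against what the Oracle listener is configured to accept.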
Hello there, we have the CVE lookup app installed in our environment. We are able to see CVE data from the NVD page in our Splunk via CVE lookup, but only up to 2020; CVE 2021 data is not being fetched. When we checked on Splunkbase we saw the notes below (https://splunkbase.splunk.com/app/4540/):

"Now supports latest nvd 1.1 json feed. Supports year 2018, 2019 and 2020. Removed vendor/product specific information because of the updated nvd 1.1 feed. Contains scripted input which may affect deployment to distributed environments."

Any suggestions on how to resolve this? Meanwhile, I have already dropped an email to the developer of this app.
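Per the Splunkbase notes, the app's scripted input only covers the 2018-2020 feeds, so 2021 would stay empty until the script's year list is extended by the developer. As a hedged sanity check, the 2021 feed itself does exist at the usual NVD 1.1 URL pattern (the exact URL the app's script requests may differ):

curl -I https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-2021.json.gz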
Hi everyone, I have one requirement. I am working on a usage dashboard that shows each dashboard name with a count. I am getting the count for every dashboard that has been visited, but I also want to show a count of 0 for dashboards that no user has visited. Below is my query:

index="_internal" sourcetype=splunkd_ui_access file IN (Data_Management ELF_API ELF_ApexCallout ELF_ApexExecution ELF_ApexSoap ELF_ApexTrigger ELF_AsyncReportRun ELF_BulkApi ELF_Console ELF_ContentDocumentLink)
| where isnull(uri_query) AND user!="-"
| stats count by file
| table file count
| sort -count

Can someone please guide me on this?
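Since stats only produces rows for values that actually occur in the data, one common pattern is to append a zero-count row for every known dashboard and then take the max per dashboard. A sketch against the query above (the dashboard list is copied from it):

index="_internal" sourcetype=splunkd_ui_access file IN (Data_Management ELF_API ELF_ApexCallout ELF_ApexExecution ELF_ApexSoap ELF_ApexTrigger ELF_AsyncReportRun ELF_BulkApi ELF_Console ELF_ContentDocumentLink)
| where isnull(uri_query) AND user!="-"
| stats count by file
| append
    [| makeresults
     | eval file=split("Data_Management,ELF_API,ELF_ApexCallout,ELF_ApexExecution,ELF_ApexSoap,ELF_ApexTrigger,ELF_AsyncReportRun,ELF_BulkApi,ELF_Console,ELF_ContentDocumentLink", ",")
     | mvexpand file
     | eval count=0
     | fields file count]
| stats max(count) AS count BY file
| sort -count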
I am a Splunk newbie and need to be able to search for files with multiple extensions (for example: filename.ps1.doc), and I am not sure how to write the query. Has anyone run across how I would go about this? Thanks in advance for any assistance!
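A minimal sketch using a regex that matches names ending in two extensions; the index and the field name (file_name) are placeholders for wherever the filename lives in your data:

index=your_index file_name=*
| regex file_name="(?i)\.[a-z0-9]{1,5}\.[a-z0-9]{1,5}$"
| stats count by file_name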
Hello, I hope someone could help me figure this one out. The core of what I am trying to do is get a list of all event codes in an index and source, sorted by source, to understand what is sending information and whether I am missing anything.

index=acg_eis_auth EventCode=*
| dedup EventCode
| fields EventCode
| stats count by EventCode
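One thing to note about the query above: dedup keeps only one event per EventCode, so the stats count will always be 1. A sketch that drops the dedup and splits the counts by source as well, so you can see which source sends which codes:

index=acg_eis_auth EventCode=*
| stats count BY source, EventCode
| sort source, EventCode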
Hi all, I was hoping someone might be able to help me figure out how to set this up: I'm consuming logs from Fortigate via syslog (log sample below for reference), but this pattern doesn't work for event breaks.

601 <189>date=2021-01-29 time=01:13:54 devname="xxxSEC" devid="FG101" logid="0101037141" type="event" subtype="vpn" level="notice" vd="root" eventtime=1611893634651699002 tz="-0300" logdesc="IPsec tunnel statistics" msg="IPsec tunnel statistics" action="tunnel-stats" remip=xxx.xxx.xxx.xxx locip=xxx.xxx.xxx.xxx remport=500 locport=500 outintf="wan1" cookies="xxx" user="N/A" group="N/A" xauthuser="N/A" xauthgroup="N/A" assignip=N/A vpntunnel="VPN-xxx" tunnelip=N/A tunnelid=0 tunneltype="ipsec" duration=4823936 sentbyte=0 rcvdbyte=120 nextstat=600607 <189>date=2021-01-29 time=01:13:54 devname="xxxSEC" devid="FG101" logid="0101037141" type="event" subtype="vpn" level="notice" vd="root" eventtime=1611893634651791593 tz="-0300" logdesc="IPsec tunnel statistics" msg="IPsec tunnel statistics" action="tunnel-stats" remip=xxx.xxx.xxx.xxx locip=xxx.xxx.xxx.xxx remport=500 locport=500 outintf="wan1" cookies="xxx" user="N/A" group="N/A" xauthuser="N/A" xauthgroup="N/A" assignip=N/A vpntunnel="Soc_xxx" tunnelip=N/A tunnelid=0 tunneltype="ipsec" duration=4823936 sentbyte=13785 rcvdbyte=368 nextstat=600597

As you can see from the sample above, each event starts with <189> and ends with nextstat=xxx. The rest of the patterns seem to work fine:

285 <190>date=2021-01-29 time=05:01:06 devname="xxxSEC" devid="FG101" logid="0100020027" type="event" subtype="system" level="information" vd="root" eventtime=1611907266834995102 tz="-0300" logdesc="Outdated report files deleted" msg="Delete 1 old report files"279 <190>date=2021-01-29 time=05:01:06 devname="xxxSEC" devid="FG101" logid="0100020027" type="event" subtype="system" level="information" vd="root" eventtime=1611907266835276157 tz="-0300" logdesc="Outdated report files deleted" msg="Delete 4 old report files" 615 <189>date=2021-01-29 time=05:00:45 devname="xxxSEC" devid="FG101" logid="0100040704" type="event" subtype="system" level="notice" vd="root" eventtime=1611907246008443458 tz="-0300" logdesc="System performance statistics" action="perf-stats" cpu=0 mem=39 totalsession=48 disk=1 bandwidth="22/7595" setuprate=1 disklograte=0 fazlograte=0 freediskstorage=426665 sysuptime=4837848 waninfo="name=wan1,bytes=1709688/171114431,packets=7001/1473776;name=wan2,bytes=15678/404589561,packets=167/5559353;" msg="Performance statistics: average CPU: 0, memory: 39, concurrent sessions: 48, setup-rate: 1" 289 <190>date=2021-01-29 time=04:58:45 devname="xxxSEC" devid="FG101" logid="0100026003" type="event" subtype="system" level="information" vd="root" eventtime=1611907125775362364 tz="-0300" logdesc="DHCP statistics" interface="mgmt" total=101 used=0 msg="DHCP statistics"

Any help would be greatly appreciated.
Hey Splunkers! Is it possible to give a Splunk user access to ONLY one dashboard in Splunk and nothing else, i.e., no searching, no reports, access to nothing but that one dashboard? If so, what's the best method to deploy access like this? Any suggestions are greatly welcome.
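One common pattern, sketched under assumptions (role, app, and index names are placeholders): give the user a role that can search only the index behind the dashboard, restrict the role's read access to just the app holding the dashboard, and set that dashboard as the user's default view. The dashboard still runs searches, so the role keeps search capability (inherited from user below):

# authorize.conf (names hypothetical)
[role_dashboard_only]
importRoles = user
srchIndexesAllowed = my_dashboard_index
srchIndexesDefault = my_dashboard_index

App visibility is then tightened in each app's permissions: remove read access for this role everywhere except the app containing the dashboard.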
I am facing a weird issue: I want to set up multiple tcp-ssl inputs and have each input use a different certificate. The reason is that our heavy forwarders will be receiving syslog inputs through two separate load balancers which will not be performing certificate offloading. My inputs.conf is as follows:

[tcp-ssl:10515]
sourcetype = source1
index = index1
disabled = 0
serverCert = /path to servercert2
sslRootCAPath = /path to rootCA cert

[tcp-ssl:10516]
sourcetype = source2
index = index2
disabled = 0

[tcp-ssl:10517]
sourcetype = source3
index = index3
disabled = 0

[SSL]
requireClientCert = false
serverCert = /path to servercert1
sslRootCAPath = /path to rootCA cert

Basically, I am setting the main certificate in the [SSL] stanza and then overriding it specifically for the [tcp-ssl:10515] stanza. Passwords for both certificates are under the correct stanzas in the local directory. I've also tried to override the certificate in [tcp-ssl:10515] by adding the paths under the local directory, but no luck. No matter what I do, Splunk serves the certificate from the [SSL] stanza (which I have confirmed by capturing and inspecting the packets).

According to the Splunk docs, what I'm trying should be possible, unless I'm misunderstanding something:

[tcp-ssl:<port>]
* Use this stanza type if you are receiving encrypted, unparsed data from a forwarder or third-party system.
* Set <port> to the port on which the forwarder/third-party system is sending unparsed, encrypted data.
* To create multiple SSL inputs, you can add the following attributes to each [tcp-ssl:<port>] input stanza. If you do not configure a certificate in the port, the certificate information is pulled from the default [SSL] stanza:
  * serverCert = <path_to_cert>
  * sslRootCAPath = <path_to_cert> This attribute should only be added if you have not configured your sslRootPath in server.conf.
  * sslPassword = <password>

I've also tried to completely ignore the [SSL] stanza and just add the certificate paths under each input's stanza, but I get an error that the inputs cannot start due to the [SSL] stanza not being defined. Any ideas?
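A hedged diagnostic step: btool shows the configuration splunkd actually resolves for each stanza, including which app each setting comes from, which can reveal a higher-precedence file silently overriding the per-port serverCert:

$SPLUNK_HOME/bin/splunk btool inputs list tcp-ssl --debug

If the resolved [tcp-ssl:10515] stanza shows your serverCert path but the served certificate is still the [SSL] one, that narrows the problem to splunkd's handling rather than to config layering.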
Hi, I have this data:

name     binary   keynumber
Steve    1100     12345
Steve    100      13246
Steve             12347
Charles           23456

I am trying to count whether the binary, read from right to left, has a 1 in position 3 and position 4, and express each as a percentage of the number of events, e.g. this result:

name    events   4thbinary   3rdbinary   %4th   %3rd
Steve   3        1           2           33     66

I was trying to get the 4th position first, as this gives me the names with a binary entry; I then thought I could join and run a subsearch to get all the events, and then do an appendcols to get entries with a 1 in the 3rd binary position in the string:

index=summary sourcetype=prod source=service binary NOT NULL
| eval red=substr(binary, -4, 1)
| stats count(red) AS red by name
| join type=left name
    [search index=summary sourcetype=prod source=service
     | dedup name keynumber
     | stats count(keynumber) AS Events by name]
| appendcols
    [search index=summary sourcetype=prod source=service binary NOT NULL
     | eval blue=substr(binary, -3, 1)
     | stats count(blue) AS blue by name]
| table name events red blue

However, I cannot get my events to equal the correct value; it only returns a value if the binary field is populated. I have looked at map and field but could not get these to work either.
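A sketch that computes both bit positions in one pass and avoids join/appendcols entirely (bit4/bit3 stand in for red/blue; dc(keynumber) assumes keynumber identifies one event per name; the length guard keeps short or empty binaries from matching):

index=summary sourcetype=prod source=service
| eval bit4=if(len(binary)>=4 AND substr(binary, -4, 1)="1", 1, 0)
| eval bit3=if(len(binary)>=3 AND substr(binary, -3, 1)="1", 1, 0)
| stats dc(keynumber) AS events, sum(bit4) AS 4thbinary, sum(bit3) AS 3rdbinary BY name
| eval pct4=floor('4thbinary' / events * 100), pct3=floor('3rdbinary' / events * 100)
| rename pct4 AS "%4th", pct3 AS "%3rd"
| table name events 4thbinary 3rdbinary %4th %3rd

For the sample data this yields Steve: events=3, 4thbinary=1, 3rdbinary=2, %4th=33, %3rd=66, matching the expected result.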
Hi Splunkers, our architecture has 3 universal forwarders running as a group behind a load balancer; all sources send logs via the load balancer, which distributes the data to any of the 3 UFs based on load. We have noticed that the UFs are not able to read all the subdirectories. Please help with troubleshooting. Thanks in advance.
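Assuming these are file monitor inputs, two hedged checks: monitor stanzas recurse into subdirectories by default, so a whitelist/blacklist setting or file permissions for the splunkd user are the more likely culprits, and inputstatus shows exactly what splunkd can see per path. The monitored path below is a placeholder:

$SPLUNK_HOME/bin/splunk list inputstatus
$SPLUNK_HOME/bin/splunk btool inputs list monitor:///var/log/myapp --debug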
Hi all! I have a problem with the time my logs arrive: there is an hour's difference. How can I solve that? And if I have data from different clients with different timezones on the same server, how can I align them? Thanks a lot!
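If the raw events carry no timezone in their timestamps, Splunk applies a default, which is usually where the hour offset comes from. A hedged sketch of per-host overrides in props.conf on the indexer or heavy forwarder; the host patterns and zone names are placeholders for your clients:

# props.conf
[host::client-emea*]
TZ = Europe/Berlin

[host::client-us*]
TZ = America/New_York

With a correct TZ per source, all events are stored in UTC internally and displayed in each viewer's own timezone, which aligns them.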
Hi, I have 2 heavy forwarders set up: F1 is forwarding to F2, and F2 forwards to Splunk Cloud. On F1 I have set up a local input listening on UDP:514 for events; this works great and forwards to cloud. On F2 I have set up a local input for UDP:514 exactly like I did on F1, but no events are forwarded. Does anyone here have a clue what could be wrong? The events are of the same type, so as long as this works on F1 it should not be an issue with interpreting/reading the events. I have checked the firewall and the events are being received, and after setting the UDP processor log level to debug I get this in my splunkd.log on F2:

02-01-2021 12:54:00.520 +0100 DEBUG UDPInputProcessor - callback()
02-01-2021 12:54:10.512 +0100 DEBUG UDPInputProcessor - callback()
02-01-2021 12:54:18.502 +0100 INFO TcpOutputProc - Found currently active indexer. Connected to idx=ForwarderIP:30132, reuse=1.
02-01-2021 12:54:20.467 +0100 DEBUG UDPInputProcessor - Generating UDP metrics
02-01-2021 12:54:20.467 +0100 DEBUG UDPInputProcessor - callback()
02-01-2021 12:54:30.514 +0100 DEBUG UDPInputProcessor - callback()
02-01-2021 12:54:34.790 +0100 DEBUG UDPInputProcessor - event=data from="PC100.Local (new)" status=accepted
02-01-2021 12:54:34.790 +0100 DEBUG UDPInputProcessor - UDPInputProcessor::when_events called
02-01-2021 12:54:34.801 +0100 DEBUG UDPInputProcessor - event=data from=PC100.Local status=accepted
02-01-2021 12:54:34.801 +0100 DEBUG UDPInputProcessor - UDPInputProcessor::when_events called
02-01-2021 12:54:34.812 +0100 DEBUG UDPInputProcessor - event=data from=PC100.Local status=accepted
02-01-2021 12:54:34.812 +0100 DEBUG UDPInputProcessor - UDPInputProcessor::when_events called
02-01-2021 12:54:34.830 +0100 DEBUG UDPInputProcessor - event=data from=PC100.Local status=accepted
02-01-2021 12:54:34.831 +0100 DEBUG UDPInputProcessor - UDPInputProcessor::when_events called
02-01-2021 12:54:44.829 +0100 DEBUG UDPInputProcessor - callback()
02-01-2021 12:54:44.829 +0100 DEBUG UDPInputProcessor - event=sendDoneKey source=PC100.Local localport=514
02-01-2021 12:54:44.829 +0100 DEBUG UDPInputProcessor - event=deleteSource source=PC100.Local localport=514
02-01-2021 12:54:48.413 +0100 INFO TcpOutputProc - Found currently active indexer. Connected to idx=ForwarderIP:30132, reuse=1.
02-01-2021 12:54:50.471 +0100 DEBUG UDPInputProcessor - Generating UDP metrics
02-01-2021 12:54:50.471 +0100 DEBUG UDPInputProcessor - callback()

I have had to replace some hostnames, as you probably can see. Hopefully someone here can help me figure this out.
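The debug lines show the UDP input accepting events on F2, so a hedged next step is to compare how F2's input and output configs resolve against F1's, e.g. whether an index, routing, or outputs setting differs between the two hosts:

$SPLUNK_HOME/bin/splunk btool inputs list udp:514 --debug
$SPLUNK_HOME/bin/splunk btool outputs list --debug

Running both commands on F1 and F2 and diffing the output is a quick way to spot the setting that diverges.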
Hello Team, as far as I know, the forwarder must forward logs to the indexer every 30 seconds. I've reinstalled the system and am trying to configure it. I opened port 9997 on the indexer for receiving, and ran ./splunk add forward-server ip and ./splunk add monitor /var/log. Logs are being collected, which is alright, but not every 30 seconds, and there are no errors in the logs. What can cause this problem?
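A hedged way to confirm the forwarding connection is healthy, independent of any expected interval (the forwarder generally streams data as it arrives rather than on a fixed schedule):

./splunk list forward-server
# A healthy setup lists the indexer under "Active forwards".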
I'm having some issues with how Splunk handles event breaking for CSV files. A sample of the CSV file in question:

USER;SYSTEMINFO;DATE;RT;VT;MESSAGE
USER1;Windows 10 Enterprise;20201128 06:05:27,862;5;0;"File not accessible ("90s"). Error: "Try again"."
USER2;Windows 10 Enterprise;20201128 06:05:29,288;15.9;5;""
USER3;Windows 10 Enterprise;20201128 06:15:25,463;5;0;"File not accessible ("90s"). Error: "Try again"."
USER4;Windows 10 Enterprise;20201128 06:15:26,830;21.3;0;""

In props.conf I have tried the following (this is a single-instance installation of Splunk with no additional forwarder, and the input is local to this instance):

[MyCSV]
FIELD_DELIMITER = ;
FIELD_QUOTE = "
HEADER_FIELD_DELIMITER = ;
HEADER_FIELD_QUOTE = None
SHOULD_LINEMERGE = false
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
description = CSV input
disabled = false
pulldown_type = true

This works perfectly in the cases where MESSAGE contains two double quotes. In the cases (like the example above) where the MESSAGE field contains multiple double quotes, Splunk can't seem to break the event properly. One event ends up like this:

USER1;Windows 10 Enterprise;20201128 06:05:27,862;5;0;"File not accessible ("90s"). Error: "Try again"."
USER2;Windows 10 Enterprise;20201128 06:05:29,288;15.9;5;""
USER3;Windows 10 Enterprise;20201128 06:15:25,463;5;0;"File not accessible ("90s"). Error: "Try again"."

But I would expect that to be broken into three separate events. From what I can gather, Splunk has trouble with the multiple double quotation marks and completely ignores the LINE_BREAKER regex. If I change FIELD_QUOTE = " to FIELD_QUOTE = None, it does break the events like it should, but the auto-extracted MESSAGE field then contains quotation marks. With FIELD_QUOTE = ", Splunk removes the double quotes around the field value but fails to break the event properly. I have also tried to change the double quotes inside the outer double quotes with SEDCMD like this:

SEDCMD-removeinnerquotes = s/(?<!;)"(?![\r\n]|$)/'/g

This works, in that the indexed MESSAGE text is changed from "File not accessible ("90s"). Error: "Try again"." to "File not accessible ('90s'). Error: 'Try again'.". However, Splunk still auto-extracts the MESSAGE field BEFORE my SEDCMD kicks in. So one event now looks like this:

USER1;Windows 10 Enterprise;20201128 06:05:27,862;5;0;"File not accessible ('90s'). Error: 'Try again'."
USER2;Windows 10 Enterprise;20201128 06:05:29,288;15.9;5;""
USER3;Windows 10 Enterprise;20201128 06:15:25,463;5;0;"File not accessible ('90s'). Error: 'Try again'."

And the MESSAGE field in the left-side menu under "Interesting fields" has the value: File not accessible ("90s"). Error: "Try again".

Because this was not working, I tried to remove all double quotes with SEDCMD like this:

SEDCMD-removequotes = s/(?<!;)"(?![\r\n]|$)/'/g s/"//g
FIELD_QUOTE = None

This causes event breaking to be done correctly (because of FIELD_QUOTE = None, I suspect), and one event is now correctly:

USER1;Windows 10 Enterprise;20201128 06:05:27,862;5;0;File not accessible ('90s'). Error: 'Try again'.

HOWEVER: the MESSAGE field on the left-hand side under "Interesting fields" still contains double quotes. Despite the event above, where all double quotes are either changed or removed, the MESSAGE field still has the value: "File not accessible ("90s"). Error: "Try again"."

This seems to indicate that whatever I try, Splunk extracts the field before any change I make, which makes it near impossible to fix this issue. I have not set INDEXED_EXTRACTIONS = CSV and have left KV_MODE = none to try to keep Splunk from extracting fields at both index and search time, but it still extracts the fields. My guess is that this has to do with the header: as long as Splunk is looking for and using the header in the CSV file, it will also auto-extract the fields. I have not found a way to change this behavior. In my desperation I also tried writing a transforms stanza to remove the header and then removing the HEADER_FIELD_DELIMITER and HEADER_FIELD_QUOTE settings, but STILL Splunk extracts the header and the fields that go with it. There is the HEADER_FIELD_LINE_NUMBER setting, but this can't be set to "none" and defaults to 0. Even if I set HEADER_FIELD_LINE_NUMBER = 1 and use the transforms to remove the header, Splunk extracts the header fields automatically. Does anyone have any tips for fixing this issue? What I want is for events to be broken correctly, 1 event per line, and for the MESSAGE field values to have no quotation marks. The obvious and best solution would be for the source system to change its logger and never include multiple double quotation marks in the MESSAGE text, but I'm afraid this is not possible, so I'm stuck trying to find a solution in Splunk. Note again that this is a single-instance Splunk Enterprise installation with no forwarder (local input).
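One hedged approach, sketched under the assumption that search-time field extraction is acceptable: take the sourcetype out of the Structured category entirely, since the FIELD_*/HEADER_* settings are what engage the header-based structured parser, which appears to run before SEDCMD and to ignore LINE_BREAKER, matching what you observed. Break purely on newlines, strip the quotes at index time, drop the header row, and extract fields at search time with a delimiter transform. Transform names are placeholders; field names come from the sample header:

# props.conf
[MyCSV]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# Skip the two fields before DATE, then read "20201128 06:05:27,862".
TIME_PREFIX = ^(?:[^;]*;){2}
TIME_FORMAT = %Y%m%d %H:%M:%S,%3N
SEDCMD-removequotes = s/"//g
TRANSFORMS-dropheader = mycsv_drop_header
REPORT-mycsv = mycsv_fields
KV_MODE = none

# transforms.conf
[mycsv_drop_header]
REGEX = ^USER;SYSTEMINFO;DATE
DEST_KEY = queue
FORMAT = nullQueue

[mycsv_fields]
DELIMS = ";"
FIELDS = USER, SYSTEMINFO, DATE, RT, VT, MESSAGE

One caveat: stripping every double quote is harmless for the empty "" values, but if MESSAGE can legitimately contain semicolons inside the quotes, a pure delimiter extraction would mis-split those rows.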
Hi, I am trying to connect my database using Splunk DB Connect, and I am getting an error stating:

Communications link failure. The last packet successfully received from the server was 2 milliseconds ago. The last packet sent successfully to the server was 2 milliseconds ago.

MySQL database version: 8.0.22
Linux version: 20.04 (where MySQL is hosted)
Splunk DB Connect version: 3.4.1
Splunk Enterprise version: 8.1.1

Both the Splunk machine and the MySQL server are on the same subnet. Any suggestions/solutions would help!! Thanks
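With MySQL 8, this error is often a driver/SSL/auth negotiation issue rather than a network one. A hedged sketch of Connector/J URL parameters that are common workarounds; host, port, and database are placeholders, and disabling SSL is only appropriate for testing:

jdbc:mysql://dbhost:3306/mydb?useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC

It is also worth confirming that the mysql-connector-java JAR installed in DB Connect's drivers directory is recent enough to speak to MySQL 8.0's default caching_sha2_password authentication.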
Hi Folks, can anybody help me create a button in a drilldown table which, on click, sends an email to a particular ID or group of IDs? Or, on clicking the button, it could pop up a field to enter a mail ID, so that the table's data is sent to that address. I saw a few answers in which JavaScript is mentioned, but I don't know how to proceed with it. It would help if anyone could walk me through the steps. Thank you!
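If opening the user's mail client pre-addressed is acceptable, a JavaScript-free sketch in Simple XML makes each table row a mailto: drilldown link. Note that this will not attach the table's data automatically, and $row.email$ assumes the table has an email column (hypothetical):

<drilldown>
  <link target="_blank">mailto:$row.email$?subject=Dashboard%20report</link>
</drilldown>

For actually mailing the table's contents without user interaction, a scheduled report with an email action, or the sendemail search command, also avoids JavaScript.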
Hi everyone, I have to implement a use case for a customer which basically means monitoring AD events of ~10 domain controllers. Based on the documentation (Monitor Active Directory - Splunk Documentation), I'm able to monitor multiple domain controllers with one inputs.conf. Would that mean that I don't have to install the UF on all those 10 domain controllers? And when installing the UF on a DC, do I have to choose a domain user with AD read rights instead of a local service user? Thank you!
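For reference, the admon input that documentation describes can target a remote DC, so a single Windows host running a UF can in principle watch several DCs. A hedged sketch; the targetDc values are placeholders, and the account running splunkd needs read access to the directory:

# inputs.conf on one Windows UF
[admon://DC1]
targetDc = dc1.example.local
monitorSubtree = 1
disabled = 0

[admon://DC2]
targetDc = dc2.example.local
monitorSubtree = 1
disabled = 0

If you also need the Security event log from each DC (logons, account changes), that is only readable locally, so a UF per DC is still the usual deployment for that part.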
Dear AppDynamics community, we are looking for ways to upload source maps in order to see full stack traces for errors and crashes in our React Native mobile apps. Please let us know if there is a way to do so. For reference, here is how it's done in other solutions/services: https://docs.bugsnag.com/platforms/react-native/react-native/showing-full-stacktraces https://docs.sentry.io/platforms/react-native/sourcemaps/ Thanks in advance!