All Topics

After upgrading our universal forwarder to 9.0.1, it started crashing almost every day. I looked at splunkd.log and saw these errors:

    11-09-2022 10:48:18.422 -0500 ERROR TcpOutputQ [25141 TcpOutEloop] - Unexpected event id=5669
    11-09-2022 10:48:18.422 -0500 ERROR TcpOutputQ [25141 TcpOutEloop] - Unexpected event id=5669
    11-09-2022 10:48:18.423 -0500 ERROR TcpOutputQ [25141 TcpOutEloop] - Unexpected event id=5670
    11-09-2022 10:48:18.423 -0500 ERROR TcpOutputQ [25141 TcpOutEloop] - Unexpected event id=5670
    11-09-2022 10:48:18.429 -0500 ERROR TcpOutputQ [25141 TcpOutEloop] - Unexpected event id=5677
    11-09-2022 10:48:18.429 -0500 ERROR TcpOutputQ [25141 TcpOutEloop] - Unexpected event id=5677

How do I know what's causing these errors?
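As a first diagnostic step, the _internal index can show how widespread and frequent these errors are; a minimal sketch, assuming the forwarder's internal logs reach your indexers (the host filter is a placeholder):

    index=_internal source=*splunkd.log* host=<your_forwarder> log_level=ERROR component=TcpOutputQ
    | timechart span=1h count by host

Correlating the error spikes with the crash times can help show whether the TcpOutputQ errors are the cause or just a symptom.
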
Hi Folks, I can't see what would have caused the false alert to trigger. When I checked the directory, I can see plenty of space:

    Size: 500g   Used: 9.6g   Avail: 491g   Use%: 2%

The query looks like this:

    index=a sourcetype=b MountedON="d" PercentUsedSpace > 90
    | stats latest(PercentUsedSpace) as PercentUsedSpace latest(Avail) as Avail latest(Used) as Used latest(UsePct) as UsePct by MountedON
    | fields MountedON UsePct Used Avail
    | rename MountedON as "Mount" UsePct as "Percent Used" Used as "Used Space" Avail as "Available Space"
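One likely cause: the base search matches any single event with PercentUsedSpace > 90 anywhere in the alert's time range, even if the latest reading is low. A sketch that filters on the latest value instead, reusing the index and field names from the query above:

    index=a sourcetype=b MountedON="d"
    | stats latest(PercentUsedSpace) as PercentUsedSpace latest(Avail) as Avail latest(Used) as Used latest(UsePct) as UsePct by MountedON
    | where PercentUsedSpace > 90
    | fields MountedON UsePct Used Avail
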
Hello! I have a field called "Customers Email" and I wanted to get a count of all the emails that end in .gov, .edu, .org, and so on. I am using the eval and stats count functions to do this; however, my results show values of 0 for each type of email. Since wildcards do not work with eval, I wrote patterns like ".*..gov" so that it would just look at the .gov part of each email. This is my search:

    | stats count(eval("Customers Email" = ".*..gov")) as ".gov",
            count(eval("Customers Email" = ".*..org")) as ".org",
            count(eval("Customers Email" = ".*..com")) as ".com",
            count(eval("Customers Email" = ".*..edu")) as ".edu",
            count(eval("Customers Email" = ".*..us")) as ".us",
            count(eval("Customers Email" = ".*..net")) as ".net"

Running this search gives a count of 0 for every column. Is there a reason why I am getting a count of 0?
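Two things are going on here: in eval, = is an exact string comparison (no wildcards or regexes), and a double-quoted field name is treated as a string literal. A sketch using match() with single-quoted field names instead, anchoring each pattern to the end of the address:

    | stats count(eval(if(match('Customers Email', "\.gov$"), 1, null()))) as ".gov",
            count(eval(if(match('Customers Email', "\.org$"), 1, null()))) as ".org",
            count(eval(if(match('Customers Email', "\.com$"), 1, null()))) as ".com",
            count(eval(if(match('Customers Email', "\.edu$"), 1, null()))) as ".edu",
            count(eval(if(match('Customers Email', "\.us$"), 1, null()))) as ".us",
            count(eval(if(match('Customers Email', "\.net$"), 1, null()))) as ".net"
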
Having some trouble blacklisting a folder that has multiple dynamic subfolders and files. I want to blacklist everything for dir1, including files and any subfolders which are created dynamically. Splunk 8.x; the host is Linux. I want to blacklist everything here:

    /var/log/dir1

Example paths:

    /var/log/dir1/file1.log
    /var/log/dir1/dir2/otherfile.log

Currently trying this syntax, but it's not working. I do have another blacklist entry that seems to be working (blacklist2), which is why I'm numbering the blacklists.

    blacklist1 = .*dir1.*
    blacklist2 = otheritem
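A sketch of the inputs.conf stanza, assuming the monitor input covers /var/log; blacklist values are regexes matched against the full file path, so anchoring on the directory prefix avoids accidentally matching "dir1" elsewhere in a path:

    [monitor:///var/log]
    blacklist1 = ^/var/log/dir1/
    blacklist2 = otheritem
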
I am sending logs to a non-Splunk server using syslog UDP from the heavy forwarders, which works fine. But recently the remote non-Splunk server went down and the heavy forwarders were not able to reach it. As a result, queues built up and used up all of the resources, to the point that all existing log ingestion stopped on the heavy forwarders. Also, some of the heavy forwarders reported the Splunk service as not running. Is there a way to prevent this from happening again? What I want to make sure of is that if the remote server goes down in the future, the queues do not build up and the resources are not exhausted, so that log ingestion still works.
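Whatever output setting ends up fixing this, it also helps to alert on queue fill before resources are exhausted; a minimal sketch against the internal metrics log (the index, source, and field names here are standard for metrics.log):

    index=_internal source=*metrics.log* group=queue
    | eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
    | timechart max(fill_pct) by name

A sustained fill percentage near 100 for any queue is an early warning that a downstream output is blocked.
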
Hi all. My company is working with Globalscape and I wish to bring their error code descriptions into Splunk. As of right now, I only get the error number and I need to go to their website to check what each code means. I was wondering if I can import the data from the website into my Splunk and include it in my queries. Here's the URL: https://kb.globalscape.com/Knowledgebase/10142/FTP-Status-and-Error-Codes
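One common approach is to copy the code/description table from that page into a CSV lookup file and join it at search time; a sketch with hypothetical file, field, and lookup names (the sample rows are standard FTP status codes):

    ftp_status_codes.csv:
    code,description
    226,"Closing data connection; requested file action successful"
    550,"Requested action not taken; file unavailable"

    ... | lookup ftp_status_codes code AS error_code OUTPUT description
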
I am planning to use DB Connect for data enrichment (I want to create a lookup using dbxquery for a dynamic dataset that needs to be constantly updated for use at search time). We already have DB Connect on our HFs, but only for inputting data from DBs. I want to install DB Connect on our search head just for the purpose of data enrichment for this one lookup table. I have seen a couple of answers online about how this could/would work, and that it is fine as long as it is for data enrichment, but I was hoping for a little more guidance or advice on people's thoughts about this. I have thought about external lookups and onboarding the data, but this just seems like the easiest and most streamlined way to do it. Thanks in advance!
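For the pattern described, a scheduled search can refresh the lookup on whatever cadence the data changes; a minimal sketch, assuming a hypothetical connection name, SQL query, and lookup file:

    | dbxquery connection="my_db_connection" query="SELECT customer_id, segment FROM customers"
    | outputlookup customer_enrichment.csv

Searches can then enrich events with | lookup customer_enrichment.csv customer_id OUTPUT segment without touching the database at search time.
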
Hello! I have a csv file with two fields called "Customers First Name" and "Customers Last Name". I was wondering if there is a way to combine the values of each field in the same row into one new field. Thank you!
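A minimal sketch using eval string concatenation; field names containing spaces need single quotes on the right-hand side of eval, and the lookup file name here is hypothetical:

    | inputlookup customers.csv
    | eval "Customer Full Name" = 'Customers First Name' . " " . 'Customers Last Name'
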
Hello friends! I am working on a dashboard in Dashboard Studio and I would like to use a token collected from a dropdown input for the 'refresh' property in the code. However, whenever I enter the token value into the 'refresh' property and click 'Back', the dashboard just shows a white screen and never loads again. Although the code editor accepts the code, it appears to crash the dashboard. I am trying to use the token in the default options like so:

    "defaults": {
        "dataSources": {
            "ds.search": {
                "options": {
                    "queryParameters": {
                        "latest": "$global_time.latest$",
                        "earliest": "$global_time.earliest$"
                    },
                    "refresh": "$interval$",
                    "refreshType": "delay"
                }
            }
        }
    },

Here's my code for the dropdown:

    "input_wdjDPusg": {
        "options": {
            "items": [
                { "label": "1 Minute", "value": "1m" },
                { "label": "5 Minutes", "value": "5m" },
                { "label": "10 Minutes", "value": "10m" }
            ],
            "token": "interval",
            "defaultValue": "1m"
        },
        "title": "Refresh Rate",
        "type": "input.dropdown"
    }

Is anyone else able to recreate this issue, or able to tell me what I'm doing wrong? Thank you!
Hi Splunk Community, I am working on a regex to filter the sources I am getting from logs. I am trying to drop everything after the last "/" in the field but I am having problems filtering. My current sources look something like: /db2audit/fs01/db2inst1/extract/abc123/file.del I am trying to filter it to look like: /db2audit/fs01/db2inst1/extract/abc123 Thanks in advance!
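A minimal sketch for trimming the filename at search time; both versions capture everything before the last "/" (the destination field name source_dir is arbitrary):

    ... | rex field=source "^(?<source_dir>.+)/[^/]+$"

or, equivalently, with a regex replace:

    ... | eval source_dir=replace(source, "/[^/]+$", "")
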
Can't seem to get this lookup (KV store) to function. The dataset is from Active Directory; in some cases, in the same event, the user field isn't populated with the SAM account data but rather with the account's SID. I'm trying to enrich the data by applying a lookup that has many fields of information about each user, so that a table has usable information from the event, such as the user name. Below is the search query:

    index=wineventlog source=WinEventLog:Security [| inputlookup AD_Audit_Change_EventCodes WHERE change_category="Group Membership" | stats values(EventCode) AS EventCode by obj_type | format | table search] src_user_type="user"
    | rex field=member_obj_sam "(?<account_sid>\C-\C-\C-\C\C-[0-9]+-[0-9]+-[0-9]+-[0-9]+)"
    | lookup AD_Obj_User sAMAccountName as src_user output displayName as "Admin Display Name"
    | lookup AD_Obj_User sid_lookup as account_sid output displayName as "Account Display Name"
    | lookup AD_Obj_User sAMAccountName as member output displayName as "Member Display Name"
    | lookup CORP_Monitored_Security_Groups_def CORP_group_name as Group_Name output CORP_group_name
    | search CORP_group_name=*
    | table _time, "Admin Display Name", src_user, Group_Name, msad_action, member_obj_sam, "Member Display Name", MSADGroupClass, MSADGroupType, src_nt_domain, host
    | rename src_user as "Admin Account", MSADGroupClass as "Type", MSADGroupType as "Scope", src_nt_domain as "Domain", Group_Name as "Group Modified", msad_action as "Action", member_obj_sam as "Member"
    | sort -_time

The lookups and rex:

# This works correctly, and the new field (account_sid) has the expected data, e.g. s-0-0-00-0000000000-0000000000-00000000-000000
    | rex field=member_obj_sam "(?<account_sid>\C-\C-\C-\C\C-[0-9]+-[0-9]+-[0-9]+-[0-9]+)"

# This works as expected
    | lookup AD_Obj_User sAMAccountName as src_user output displayName as "Admin Display Name"

# This does not work!
    | lookup AD_Obj_User sid_lookup as account_sid output displayName as "Account Display Name"

# This works as expected
    | lookup AD_Obj_User sAMAccountName as member output displayName as "Member Display Name"

# If I perform the following, I get the results I expect in a new search window
    | inputlookup AD_Obj_User | search sid_lookup="s-0-0-00-0000000000-0000000000-00000000-000000"

I'm not sure if I've hit a lookup limit or if there is an obvious error in the query itself, but I can't see anything in the official literature about what is going wrong.
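One thing worth ruling out: KV store lookup matching is case-sensitive, and Windows SIDs are normally uppercase ("S-1-5-21-..."), so an extracted value whose case differs from what's stored in sid_lookup will silently fail to match. A sketch that normalizes case before the lookup, assuming the stored values are lowercase as in the inputlookup example above:

    | eval account_sid=lower(account_sid)
    | lookup AD_Obj_User sid_lookup as account_sid output displayName as "Account Display Name"

Note that | search sid_lookup="..." matches case-insensitively, so the inputlookup test succeeding does not prove the case matches exactly.
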
I want our operations folks to be able to quickly see which unusual log messages have started showing up. That is, rather than wading through lots of messages that are typical, I want them to find the recent unusual ones. Is this a job for Splunk's anomaly detection in the MLTK? Thanks
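Before reaching for MLTK, the built-in cluster command can surface rare message shapes with no model training; a minimal sketch, assuming a hypothetical index name:

    index=app_logs
    | cluster showcount=true
    | sort cluster_count
    | head 20
    | table cluster_count _raw

Sorting by ascending cluster_count puts the rarest message patterns first, which is often exactly the "unusual messages" view operations wants.
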
Hi Splunkers, I have a question about exporting report results on Splunk Cloud (SaaS). Suppose I have a scheduled report and I want to save the results in a certain format (PDF, CSV, and so on). What if I need to export them to a third-party system/external server? Searching Google and this community, I saw that the main option is to deploy a script that takes the output of the report and forwards it outside Splunk. My question is: is this the only way, or are there other options?
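One scripted option is the REST export endpoint, which lets the external system pull results rather than having Splunk push them; a minimal curl sketch with placeholder stack name, credentials, and report name, assuming REST API access is enabled for your Splunk Cloud stack:

    curl -k -u user:password "https://<stack>.splunkcloud.com:8089/services/search/jobs/export" \
        -d search="| savedsearch my_scheduled_report" \
        -d output_mode=csv
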
Hello there, I'm experiencing an error that I can't figure out how to fix. Sometimes when I open my dashboard created with Dashboard Studio, I receive this error: [screenshot not included]. I'm using Splunk 8.2.2; does anyone know what is going on? Thanks in advance.
Hi all. I'm working with an FTP server which includes a session number with each status, and I wish to extract the session number into a separate value to use later. Examples of the field's values are:

    [12345156]quit
    [14365361]pass

I tried using replace "[*]" with * on cs_status, but it won't remove the session number (the value inside the [] is the session number). Basic search query: index=application sourcetype=FTPlogs

Thank you for the assistance!
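If that was the replace command, the pattern "[*]" has to match the whole value, and "[12345156]quit" doesn't end with "]", so nothing is replaced. A minimal rex sketch that splits the bracketed session number from the command; the field name cs_status comes from the question, and the capture names are arbitrary:

    index=application sourcetype=FTPlogs
    | rex field=cs_status "^\[(?<session_id>\d+)\](?<command>.*)$"
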
Hello Splunk team, I'm getting the following error while trying to deploy a new app on our SH cluster:

    Error while creating deployable apps: Error compressing the temporary tarball: /opt/splunk/var/run/splunk/deploy.1805b9b8294a5b90.tmp/apps/SplunkEnterpriseSecuritySuite.bundle: No space left on device

While I do understand the error, my challenge is that I can't increase the /opt/splunk partition any further. So my question for you guys is whether it is possible to exclude SplunkEnterpriseSecuritySuite from the deployment without deleting it. In other words, can I tell Splunk to push just a specific app without bundling the other apps, the way we do on a HF with a specific server class? Kind regards,
Hi All, We are trying to install the Apache Web Server Agent for IHS v9. We have done all the configuration as per the document. When we try to restart the web server, we are getting an "Exec format error".
Hi Team, I created table view visualization leveraging splunk js framework. But I have to apply back ground color based on dependent field value.  My requirement is, I have solarwinds,cmdb,kenna ... See more...
Hi Team, I created a table view visualization leveraging the SplunkJS framework, but I have to apply a background color based on a dependent field's value. My requirement: I have solarwinds, cmdb, and kenna fields. If cmdb != solarwinds or kenna != solarwinds, then I have to apply a RED background color to the associated cell (cmdb/kenna); otherwise, the background color should be green.
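Since a SplunkJS cell renderer only sees one cell at a time, one approach is to compute the comparison in SPL first and let the renderer color cells based on the resulting flag fields; a sketch with the field names from the question (the flag names are arbitrary):

    ... | eval cmdb_flag=if(cmdb != solarwinds, "RED", "GREEN"),
               kenna_flag=if(kenna != solarwinds, "RED", "GREEN")

A custom BaseCellRenderer can then key its CSS class off the flag columns (or a combined value|flag field) instead of re-deriving the comparison in JavaScript.
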
Hi, I am trying to build a query where I need job duration. Each job can run multiple times, and its start/end times are recorded in multiple lines. I remember using streamstats for this requirement but couldn't figure it out. Thanks
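A minimal streamstats sketch, assuming hypothetical field names (job_name, plus a status field with "started"/"finished" values) and one start event per finish event:

    index=app sourcetype=job_logs status IN ("started", "finished")
    | sort 0 job_name _time
    | streamstats window=2 range(_time) as duration by job_name
    | where status="finished"
    | table _time job_name duration

The window=2 range(_time) pairs each event with the previous one for the same job, so on a "finished" event the duration is finish time minus start time.
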
Could I restore data from Dynamic Data Self Storage (DDSS) to a Splunk Cloud instance? I know it isn't possible with Splunk itself, but could it be achievable with, for example, a Lambda function connected to Splunk Cloud HEC? Is there any other tool that could do this?
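For the replay idea, the DDSS buckets would first have to be read somewhere, since they are raw index buckets rather than plain event text; one commonly described path is restoring them into thaweddb on a self-managed Splunk Enterprise instance and exporting the events from there. After that, re-sending events over HEC is straightforward; a minimal sketch with placeholder stack name, token, and event body:

    curl "https://http-inputs-<stack>.splunkcloud.com/services/collector/event" \
        -H "Authorization: Splunk <hec_token>" \
        -d '{"event": "restored event text", "sourcetype": "restored_data", "time": 1668000000}'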