All Topics

I'm trying to do a search to find IPs trying to log in using multiple usernames (using Duo). I have it working very close to how I want; the only issue is I need to filter out IP-user entries where only one username was attempted. Is there a way to do a simple where clause against the number of strings returned from values()? I tried where values(username) > 1, but I guess that would have been too simple.

index=duo
| stats values(username) as user, count(username) as attempts by src_ip
| where attempts > 1
| sort -attempts
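A minimal sketch of one way this is often handled, assuming the goal is to keep only source IPs that attempted more than one distinct username: dc() returns the distinct count, which the where clause can test directly instead of values().

index=duo
| stats values(username) as user, dc(username) as distinct_users, count as attempts by src_ip
| where distinct_users > 1
| sort -attempts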
Hi, on our Splunk instance I have set up a report using a timechart with a span of 1h and a time frame of a day, and the report is scheduled to run every hour. However, each time the report runs it shows different results. Just wondered if anyone has seen this before?

Thanks,

Joe
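A hedged guess at one common cause: if the scheduled window includes the current, still-filling hour, or events arrive with some indexing lag, each run sees a different partial bucket. A minimal sketch that snaps the window to completed hours only (the base search and index name are placeholders):

index=your_index earliest=-24h@h latest=@h
| timechart span=1h count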
Hi, I developed a modular input making use of the Python Cryptodome library (https://pycryptodome.readthedocs.io). When executing it on macOS Ventura, it raises the error:

... _raw_ecb.abi3.so' not valid for use in process: mapped file has no cdhash, completely unsigned? Code has to be at least ad-hoc signed.

When executing the same code with a brew-installed Python 3.7, the code runs fine. Minimal example:

# create and activate a virtual environment:
python3.7 -m venv venv
source venv/bin/activate
# install the necessary library
python3.7 -m pip install pycryptodomex
# exit the virtual env
deactivate
# move to where the packages have been stored
cd venv/lib/python3.7/site-packages

Test #1: execute "python3.7", then type:

from Cryptodome.Cipher import AES
---> no error is raised

Test #2: start the python3 interpreter bundled with Splunk:

splunk cmd python3
>>> from Cryptodome.Cipher import AES
OSError: Cannot load native module 'Cryptodome.Cipher._raw_ecb': Not found '_raw_ecb.cpython-37m-darwin.so', Cannot load '_raw_ecb.abi3.so' ... .../_raw_ecb.abi3.so' not valid for use in process: mapped file has no cdhash, completely unsigned? Code has to be at least ad-hoc signed.), Not found '_raw_ecb.so'

I found this interesting article about similar problems in Inkscape: https://gitlab.com/inkscape/inkscape/-/issues/2299 and then I executed:

codesign -d --entitlements - /Applications/Splunk/bin/python3.7m

Executable=/Applications/Splunk/bin/python3.7m
[Dict]
  [Key] com.apple.security.cs.disable-executable-page-protection
  [Value] [Bool] true

There is no allowance for unsigned libraries, apparently.

I tried this with Splunk v8.2.7 and v9.0.2 on an Intel-based Mac running macOS Ventura. Do you have any suggestions?
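Since the loader complains that the library has to be at least ad-hoc signed, one hedged thing to try is ad-hoc signing the Cryptodome native extensions that Splunk's Python loads. The path below is an assumption and should be adjusted to wherever the modular input actually ships its packages.

# ad-hoc sign every native extension bundled with pycryptodomex (path is an example)
find /Applications/Splunk/etc/apps/your_app/bin/Cryptodome -name "*.so" -exec codesign --force --sign - {} \;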
Looking to see if anyone has successfully migrated UBA nodes from Ubuntu to RHEL? We have several nodes whose Ubuntu release is EOL, and Splunk is no longer supporting UBA on Ubuntu past 18. Ideally, we would be able to carry the databases over as part of the migration. I did look at the documentation for migration/upgrades; unfortunately, the migration tasks specifically call out that the OSes must be the same.
I have the following query with multiple joins using max=0, which is not giving me all results; I think the data volume grows too large and it's unable to take the load. How can I best optimize my query?

index=syslog (process=*epilog* "*slurm-epilog: START user*") OR (process=*prolog* "*slurm-prolog: END user*") host IN (preo*) timeformat="%Y-%m-%dT%H:%M:%S.%6N"
| rex field=_raw "(?<epilog_start>[^ ]+)\-\d{2}\:\d{2}.*slurm\-epilog\:\s*START\suser\=(?<user>[^\s]+)\sjob\=(?<job_id>[^ ]+)"
| rex field=_raw "(?<prolog_end>[^ ]+)\-\d{2}\:\d{2}.*slurm\-prolog\:\s*END\suser\=(?<user>[^\s]+)\sjob\=(?<job_id>[^ ]+)"
| stats values(epilog_start) as epilog_start values(prolog_end) as prolog_end first(_time) as _time by host user job_id
| search prolog_end!=""
| search user=*
| search job_id=*
| eval current_time = now()
| join host max=0 type=left
    [ search index=syslog "Linux version" host IN (preos*)
    | rex field=_raw "(?<reboot_time>[^ ]+)\-\d{2}\:\d{2}.*kernel:\s.*Linux\sversion"
    | stats count by reboot_time host ]
| convert timeformat="%Y-%m-%dT%H:%M:%S.%6N" mktime(prolog_end) as job_start
| convert timeformat="%Y-%m-%dT%H:%M:%S.%6N" mktime(epilog_start) as epilog_start
| convert timeformat="%Y-%m-%dT%H:%M:%S.%6N" mktime(reboot_time) as reboot_time
| eval diff_reboot = reboot_time - job_start
| eval reboot_time = if(diff_reboot<0, "", reboot_time)
| eval diff_reboot = if(diff_reboot<0, "", diff_reboot)
| eval job_end = if(epilog_start!="",epilog_start,if(diff_reboot>0,reboot_time,current_time))
| eval diff_reboot = if(diff_reboot!="",diff_reboot,10000000000)
| sort host user job_id prolog_end diff_reboot
| dedup host user job_id prolog_end
| join host max=0 type=left
    [ search index=syslog (("NVRM: Xid*") process=kernel) host IN (preos*)
    | rex field=_raw "(?<kernel_xid>[^ ]+)\-\d{2}\:\d{2}.*NVRM\:\sXid\s*\(PCI\:(?<PCIe_Bus_Id>[^ ]+)\)\:\s*(?<Error_Code>[^ ]+)\,\spid\=(?<pid>[^ ]+)\,\s*name\=(?<name>[^ ]+)\,\s(?<Log_Message>.*)"
    | stats count by host kernel_xid PCIe_Bus_Id pid name Error_Code Log_Message
    | search Error_Code="***" ]
| search kernel_xid!=""
| convert timeformat="%Y-%m-%dT%H:%M:%S.%6N" mktime(kernel_xid) as xid_time
| eval diff = xid_time - job_start
| eval diff2 = job_end - xid_time
| search diff>0 AND diff2>0
| eval job_start = strftime(job_start,"%Y-%m-%d %H:%M:%S.%3N")
| eval job_end = if(job_end==current_time,"N/A",strftime(job_end,"%Y-%m-%d %H:%M:%S.%3N"))
| eval xid_time = strftime(xid_time,"%Y-%m-%d %H:%M:%S.%3N")
| eval current_time = strftime(current_time,"%Y-%m-%d %H:%M:%S.%3N")
| eval reboot_time = strftime(reboot_time,"%Y-%m-%d %H:%M:%S.%3N")
| join user job_id type=left
    [ search index="slurm-jobs"
    | stats count by job_id job_name user nodelist time_start time_end state submit_line ]
| eval _time=strptime(xid_time,"%Y-%m-%d %H:%M:%S.%3N")
| stats count by host user job_id job_start job_end xid_time Error_Code PCIe_Bus_Id pid name Log_Message job_name nodelist state submit_line
| dedup host

Below are sample logs.

Events for the first search:
2022-11-08T14:59:53.134550-08:00 preos slurm-epilog: START user=abc job=62112
2022-11-08T14:58:25.101203-08:00 preos slurm-prolog: END user=abc job=62112

Events for the first join's subsearch:
2022-11-09T12:49:51.395174-08:00 preos kernel: [ 0.000000] Linux version hjhc (buildd@lcy02-amd64-032) (gcc (Ubuntu 11.2.0-19ubuntu1) 11.2.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #58-Ubuntu SMP Thu Oct 13 08:03:55 UTC 2022 (Ubuntu 5.15.0-52.58-generic 5.15.60)

Events for the second join:
2022-11-09T12:35:15.422001-08:00 preos kernel: [ 99.166912] NVRM: Xid (PCI:00:2:00): 95, pid='<unknown>', name=<unknown>, Uncontained: FBHUB. RST: Yes, D-RST: No

Events for the last join:
2022-11-09 20:50:02.000, mod_time="1668027082", job_id="62112", job_name="rfm_nvcp_bek_cd_job", user="abc", account="admin", partition="vi2", qos="all", resv="YES", timelimit_minutes="10", work_dir="/sbatch/logs/2022-11-09T11-30-39/preos/viking-hbm2/builtin/nvcomp_benchmark_cascaded", submit_line="sbatch rfm_nvcomp_benchmark_cascaded_job.sh", time_submit="2022-11-09 12:11:13", time_eligible="2022-11-09 12:11:13", time_start="2022-11-09 12:50:02", time_end="2022-11-09 12:51:22", state="COMPLETED", exit_code="0", nodes_alloc="1", nodelist="preos0093", submit_to_start_time="00:38:49", eligible_to_start_time="00:38:49", start_to_end_time="00:01:20"

Thanks in advance.
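A minimal sketch of the usual join-free pattern for the first join only, assuming the reboot events just need to contribute reboot_time per host: pull the event types in one base search, extract the fields, copy reboot_time onto the job events with eventstats, and aggregate once. The Xid and slurm-jobs joins could be folded in the same way, keyed on host and on user/job_id respectively.

index=syslog host IN (preos*) ("*slurm-epilog: START user*" OR "*slurm-prolog: END user*" OR "Linux version")
| rex field=_raw "(?<epilog_start>[^ ]+)\-\d{2}\:\d{2}.*slurm\-epilog\:\s*START\suser\=(?<user>[^\s]+)\sjob\=(?<job_id>[^ ]+)"
| rex field=_raw "(?<prolog_end>[^ ]+)\-\d{2}\:\d{2}.*slurm\-prolog\:\s*END\suser\=(?<user>[^\s]+)\sjob\=(?<job_id>[^ ]+)"
| rex field=_raw "(?<reboot_time>[^ ]+)\-\d{2}\:\d{2}.*kernel:\s.*Linux\sversion"
| eventstats values(reboot_time) as reboot_time by host
| stats values(epilog_start) as epilog_start values(prolog_end) as prolog_end values(reboot_time) as reboot_time first(_time) as _time by host user job_id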
Constant memory growth with the Universal Forwarder, with ever-increasing channels. Once the third-party receiver is restarted, the UF re-sends a lot of duplicate data and frees up the channels.
Hi, I have a challenge that I need to know how to solve with Splunk, math, statistics, etc. Here is a sample of the log:

2022-10-21 13:19:23:120   10
2022-10-21 13:19:23:120   20
2022-10-21 13:19:23:120   9999
2022-10-21 13:19:23:120   10
2022-10-21 13:19:23:120   0
2022-10-21 13:19:23:120   40

FYI 1: I don't want to summarize or average the values.
FYI 2: each second contains over a thousand data points, and I need to find anomalies in the last 24 hours.
FYI 3: I need an optimized solution because there is a lot of data.
FYI 4: I don't want to set a threshold (e.g. 100) and filter only values above it, because an abnormal situation might occur below that value.
FYI 5: there is no constant value for detecting the anomaly (e.g. checking whether 10 values increase and marking that as abnormal).

How can I find (1) incrementally increasing points or (2) constant-rate points in the last 24 hours? The result does not need to be shown on a chart, because of the large number of data points; just list them.

Any ideas? Thanks
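A minimal sketch of one threshold-free approach, assuming the value can be extracted into a numeric field: compare each point against a rolling mean and standard deviation with streamstats, and list only the points that deviate strongly. The index name, extraction regex, window size, and z-score cutoff are all placeholders to tune.

index=your_index earliest=-24h
| rex field=_raw "\s(?<value>\d+)\s*$"
| streamstats window=10000 avg(value) as rolling_avg stdev(value) as rolling_sd
| eval zscore = if(rolling_sd > 0, abs(value - rolling_avg) / rolling_sd, 0)
| where zscore > 3
| table _time value zscore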
I have a working search that uses a lookup, like this:

index=MyIndex [| inputlookup MyCSVFile | stats values(email) AS EmailAddress | format]
| chart count(Code) as NumCodes over EmailAddress
| sort -NumCodes

This works, but there are duplicate codes, so I want the search to count only unique codes per user. I am not sure how to say "count unique". Thank you for your help!
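A minimal sketch: dc() is the distinct-count aggregation, so swapping it in for count() should count each code only once per user; everything else stays the same.

index=MyIndex [| inputlookup MyCSVFile | stats values(email) AS EmailAddress | format]
| chart dc(Code) as NumCodes over EmailAddress
| sort -NumCodes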
After upgrading our universal forwarder to 9.0.1, it started crashing almost every day. I looked at splunkd.log and saw these errors:

11-09-2022 10:48:18.422 -0500 ERROR TcpOutputQ [25141 TcpOutEloop] - Unexpected event id=5669
11-09-2022 10:48:18.422 -0500 ERROR TcpOutputQ [25141 TcpOutEloop] - Unexpected event id=5669
11-09-2022 10:48:18.423 -0500 ERROR TcpOutputQ [25141 TcpOutEloop] - Unexpected event id=5670
11-09-2022 10:48:18.423 -0500 ERROR TcpOutputQ [25141 TcpOutEloop] - Unexpected event id=5670
11-09-2022 10:48:18.429 -0500 ERROR TcpOutputQ [25141 TcpOutEloop] - Unexpected event id=5677
11-09-2022 10:48:18.429 -0500 ERROR TcpOutputQ [25141 TcpOutEloop] - Unexpected event id=5677

How do I find out what's causing these errors?
Hi folks, I can't see what would have caused this false alert to trigger: when I checked the directory, I can see plenty of space (Size: 500g, Used: 9.6g, Avail: 491g, Use%: 2%). The query looks like this:

index=a sourcetype=b MountedON="d" PercentUsedSpace > 90
| stats latest(PercentUsedSpace) as PercentUsedSpace latest(Avail) as Avail latest(Used) as Used latest(UsePct) as UsePct by MountedON
| fields MountedON UsePct Used Avail
| rename MountedON as "Mount" UsePct as "Percent Used" Used as "Used Space" Avail as "Available Space"
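A hedged observation: because PercentUsedSpace > 90 is applied before stats latest(), a single old event above 90% anywhere in the search window can trigger the alert even though the mount is now nearly empty. A minimal sketch that filters on the latest value instead:

index=a sourcetype=b MountedON="d"
| stats latest(PercentUsedSpace) as PercentUsedSpace latest(Avail) as Avail latest(Used) as Used latest(UsePct) as UsePct by MountedON
| where PercentUsedSpace > 90
| fields MountedON UsePct Used Avail
| rename MountedON as "Mount" UsePct as "Percent Used" Used as "Used Space" Avail as "Available Space"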
Hello! I have a field called "Customers Email" and I want to get a count of all the emails that end in .gov, .edu, .org and so on. I am using eval and stats count to do this; however, my results show a value of 0 for each type of email. Since wildcards do not work with eval, I wrote the patterns like ".*..gov" so that it would just look at the .gov etc. of each email. This is my search:

| stats count(eval("Customers Email" = ".*..gov")) as ".gov",
        count(eval("Customers Email" = ".*..org")) as ".org",
        count(eval("Customers Email" = ".*..com")) as ".com",
        count(eval("Customers Email" = ".*..edu")) as ".edu",
        count(eval("Customers Email" = ".*..us")) as ".us",
        count(eval("Customers Email" = ".*..net")) as ".net"

When I run this search, every count comes back as 0. Is there a reason why I am getting a count of 0?
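A hedged sketch of why this returns 0 and one way around it: inside eval, = is a literal string comparison, so ".*..gov" never equals a real address. match() takes a regex, and field names containing spaces are referenced with single quotes in eval. Something like the following should count the suffixes, assuming the field is exactly "Customers Email":

| stats count(eval(if(match('Customers Email', "\.gov$"), 1, null()))) as ".gov",
        count(eval(if(match('Customers Email', "\.org$"), 1, null()))) as ".org",
        count(eval(if(match('Customers Email', "\.com$"), 1, null()))) as ".com",
        count(eval(if(match('Customers Email', "\.edu$"), 1, null()))) as ".edu",
        count(eval(if(match('Customers Email', "\.us$"), 1, null()))) as ".us",
        count(eval(if(match('Customers Email', "\.net$"), 1, null()))) as ".net"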
Having some trouble blacklisting a folder that has multiple dynamic subfolders and files. I want to blacklist everything for dir1, including files and any subfolders which are created dynamically. Splunk 8.x, and the host is Linux. I want to blacklist everything here:

/var/log/dir1

Example paths:
/var/log/dir1/file1.log
/var/log/dir1/dir2/otherfile.log

Currently trying this syntax, but it's not working. I do have another blacklist item that seems to be working (blacklist2), which is why I'm numbering the blacklists.

blacklist1 = .*dir1.*
blacklist2 = otheritem
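A minimal sketch of what usually works, assuming the blacklist lives in the same [monitor://...] stanza that picks up /var/log (the stanza path here is an assumption): blacklist values are regexes matched against the full file path, so anchoring on the directory prefix covers every file and dynamically created subfolder under it.

[monitor:///var/log]
# drop everything under /var/log/dir1, including nested subdirectories
blacklist1 = ^/var/log/dir1/
blacklist2 = otheritem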
I am sending logs to a non-Splunk server using syslog over UDP from the heavy forwarders, which works fine. But recently the remote non-Splunk server went down and the heavy forwarders were not able to reach it. As a result, multiple queues built up, which used up all of the resources to the point that all existing log ingestion stopped on the heavy forwarders. Also, some of the heavy forwarders reported the Splunk service as not running. Is there a way to prevent this from happening again in the future? What I want to make sure of is that if the remote server goes down again, the queues do not build up and the resources are not exhausted, so that log ingestion keeps working.
Hi all. My company is working with GlobalScape, and I wish to add their error code descriptions to Splunk. As of right now, I only get the error number, and I need to go to their website to check what each code means. I was wondering if I can import the data from the website into my Splunk and include it in my queries. Here's the URL: https://kb.globalscape.com/Knowledgebase/10142/FTP-Status-and-Error-Codes
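A hedged sketch, assuming the table on that page is saved manually as a CSV lookup (for example globalscape_ftp_codes with columns code and description) and that the events carry the code in a field called error_code; all of these names are placeholders:

index=your_index sourcetype=your_sourcetype
| lookup globalscape_ftp_codes code as error_code OUTPUT description as error_description
| table _time error_code error_description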
I am planning to use DB Connect for data enrichment (I want to create a lookup using dbxquery for a dynamic dataset that needs to be constantly updated for use at search time). We already have DB Connect on our HFs, but only for data input from DBs. I want to install DB Connect on our search head just for the purpose of data enrichment for this one lookup table. I have seen a couple of answers online about how this could/would work, and that it is fine as long as it is for data enrichment, but I was hoping for a little more guidance or advice on people's thoughts about this. I have thought about external lookups and onboarding the data, but this just seems like the easiest and most streamlined way to do it. Thanks in advance!
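A minimal sketch of the usual pattern, with the connection, query, and lookup names as placeholders: a scheduled search on the search head runs dbxquery and rewrites the lookup, so search-time enrichment always sees reasonably fresh data.

| dbxquery connection="my_db_connection" query="SELECT key_field, attribute_1, attribute_2 FROM enrichment_table"
| outputlookup my_enrichment_lookup.csv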
Hello! I have a CSV file with two fields called "Customers First Name" and "Customers Last Name". I was wondering if there is a way to combine the values of each field in the same row into one new field. Thank you!
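A minimal sketch: in eval, fields whose names contain spaces are referenced with single quotes, and the dot operator concatenates strings. The new field name "Customers Full Name" is just an example.

| eval "Customers Full Name" = 'Customers First Name' . " " . 'Customers Last Name'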
Hello friends! I am working on a dashboard in Dashboard Studio and I would like to use a token collected from a dropdown input for the 'refresh' property in the code. However, whenever I enter the token value into the 'refresh' property and click 'Back', the dashboard just shows a white screen and never loads again. Although the code editor accepts the code, it appears to crash the dashboard. I am trying to use the token in the default options like so:

"defaults": {
    "dataSources": {
        "ds.search": {
            "options": {
                "queryParameters": {
                    "latest": "$global_time.latest$",
                    "earliest": "$global_time.earliest$"
                },
                "refresh": "$interval$",
                "refreshType": "delay"
            }
        }
    }
},

Here's my code for the dropdown:

"input_wdjDPusg": {
    "options": {
        "items": [
            { "label": "1 Minute", "value": "1m" },
            { "label": "5 Minutes", "value": "5m" },
            { "label": "10 Minutes", "value": "10m" }
        ],
        "token": "interval",
        "defaultValue": "1m"
    },
    "title": "Refresh Rate",
    "type": "input.dropdown"
}

Is anyone else able to recreate this issue, or able to tell me what I'm doing wrong? Thank you!
Hi Splunk Community, I am working on a regex to filter the sources I am getting from logs. I am trying to drop everything after the last "/" in the field, but I am having problems getting the filtering right. My current sources look something like:

/db2audit/fs01/db2inst1/extract/abc123/file.del

I am trying to filter it to look like:

/db2audit/fs01/db2inst1/extract/abc123

Thanks in advance!
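A minimal sketch, assuming this is done at search time on a field called source: replace() with a regex anchored at the end of the string drops the final "/" and everything after it.

| eval source_dir = replace(source, "/[^/]+$", "")

If the goal is instead to rewrite the source at index time, the same regex idea would go into a props.conf/transforms.conf rule rather than an eval.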
Can't seem to get this lookup (KV store) to function. The dataset is from Active Directory; in some cases, in the same event, the user field isn't populated with the SAM account data but rather with the account's SID. I'm trying to enrich the data by applying a lookup that has many fields of information about each user, so that a table has usable information from the event, such as the user name. Below is the search query:

index=wineventlog source=WinEventLog:Security [| inputlookup AD_Audit_Change_EventCodes WHERE change_category="Group Membership" | stats values(EventCode) AS EventCode by obj_type | format | table search] src_user_type="user"
| rex field=member_obj_sam "(?<account_sid>\C-\C-\C-\C\C-[0-9]+-[0-9]+-[0-9]+-[0-9]+)"
| lookup AD_Obj_User sAMAccountName as src_user output displayName as "Admin Display Name"
| lookup AD_Obj_User sid_lookup as account_sid output displayName as "Account Display Name"
| lookup AD_Obj_User sAMAccountName as member output displayName as "Member Display Name"
| lookup CORP_Monitored_Security_Groups_def CORP_group_name as Group_Name output CORP_group_name
| search CORP_group_name=*
| table _time, "Admin Display Name", src_user, Group_Name, msad_action, member_obj_sam, "Member Display Name", MSADGroupClass, MSADGroupType, src_nt_domain, host
| rename src_user as "Admin Account", MSADGroupClass as "Type", MSADGroupType as "Scope", src_nt_domain as "Domain", Group_Name as "Group Modified", msad_action as "Action", member_obj_sam as "Member"
| sort -_time

The lookups and rex:

# This works correctly, and the new field (account_sid) has the expected data, e.g. s-0-0-00-0000000000-0000000000-00000000-000000
| rex field=member_obj_sam "(?<account_sid>\C-\C-\C-\C\C-[0-9]+-[0-9]+-[0-9]+-[0-9]+)"

# This works as expected
| lookup AD_Obj_User sAMAccountName as src_user output displayName as "Admin Display Name"

# This does not work!
| lookup AD_Obj_User sid_lookup as account_sid output displayName as "Account Display Name"

# This works as expected
| lookup AD_Obj_User sAMAccountName as member output displayName as "Member Display Name"

# If I run the following in a new search window, I get the results I expect
| inputlookup AD_Obj_User | search sid_lookup="s-0-0-00-0000000000-0000000000-00000000-000000"

I'm not sure if I've hit a lookup limit or if there is an obvious error in the query itself, but I can't see anything in the official literature as to what is going wrong.
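A hedged first thing to check: the lookup command matches KV store values case-sensitively, while | search is case-insensitive, so the inputlookup test can succeed even when the lookup itself never matches. A minimal sketch that normalizes case before the SID lookup, assuming the sid_lookup values are stored in upper case (swap upper() for lower() if they are stored the other way):

| eval account_sid = upper(account_sid)
| lookup AD_Obj_User sid_lookup as account_sid OUTPUT displayName as "Account Display Name"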
I want our operations folks to be able to quickly see which unusual log messages have started showing up. That is, rather than wading through lots of messages that are typical, I want them to find the recent unusual ones. Is this a job for Splunk's anomaly detection in the MLTK?

Thanks
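The MLTK can do this, but a hedged, lighter-weight sketch is often enough as a first pass: the cluster command groups similar raw messages, and with showcount enabled the cluster_count field shows how common each message shape is, so small clusters in a recent window are the unusual ones. The index name, time window, and thresholds are placeholders.

index=app_logs earliest=-4h
| cluster showcount=true t=0.8
| where cluster_count < 5
| table _time cluster_count _raw
| sort cluster_count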