All Topics


Hello community! For a project I'm working on, I need to split a single multivalue event into multiple single-value events. Let me try to represent it for clarity.
As is:
Event: A B C null D E F G H I L
Desired:
Event1: A B C null D
Event2: A B E F G
Event3: A B H I L
Is it possible to achieve this? I played around with mvexpand and mvzip but wasn't able to reach my goal. Thanks in advance for your kind support.
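One way to sketch this with mvrange and mvexpand, assuming the values live in a multivalue field (called vals here, with A and B treated as common values) and that each output event should carry a fixed chunk of three values; the field names and the chunk size of three are illustrative assumptions, not from the original post:

| makeresults
| eval common="A B", vals=split("C,null,D,E,F,G,H,I,L", ",")
| eval chunk_no=mvrange(0, ceil(mvcount(vals) / 3))
| mvexpand chunk_no
| eval chunk=mvindex(vals, chunk_no * 3, chunk_no * 3 + 2)
| eval event=common." ".mvjoin(chunk, " ")
| table event

mvrange builds one index per chunk, mvexpand turns each index into its own row, and mvindex pulls the three values belonging to that chunk; replacing makeresults with the real base search and vals with the real multivalue field should give one row per desired event.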
Good afternoon, I have already raised a similar topic. Last time the situation was clarified for me, but the problem has not been resolved. I send messages to /services/collector/raw, and Splunk takes the value from my "eventTime" field ("1985-04-12T23:21:15Z") and uses it as _time, which breaks my normal searches and the normal operation of alerts. In my case the sourcetype is assigned by Splunk itself when it checks the token. I tried adjusting the sourcetype but that didn't help. In the screenshot, I showed what I tried to do. Unfortunately, we do not have the ability to always send messages to the /event endpoint. Message example:
curl --location --request POST 'http://10.10.10.10:8088/services/collector/raw' --header 'Authorization: Splunk a2-b5-48-9ff7-95r' --header 'Content-Type: text/plain' --data-raw '{ "messageType": "RABIS", "eventTime": "1985-04-12T23:21:15Z", "messageId": "ED280816-E404-444A-A2D9-FFD2D171F447" }'
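If the goal is to have _time come from ingest time rather than from the eventTime value in the payload, one place to look is the timestamp settings in props.conf for the sourcetype attached to the HEC token. A minimal sketch, assuming a sourcetype stanza name of your own (my_raw_hec here is hypothetical), placed on the instance that hosts the HEC input:

[my_raw_hec]
# Assumption: stamp events with current indexing time instead of parsing a timestamp from the payload.
DATETIME_CONFIG = CURRENT

Alternatively, TIME_PREFIX and TIME_FORMAT in the same stanza can point timestamp extraction at a different part of the event if some field other than eventTime should drive _time.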
Hello, for starters, I'm an amateur at regex, so I use the Field Extraction UI, but it's very clunky, cannot extract all the fields I want, and sometimes produces wrong extractions. The event I want to extract from is:
2022-11-09 17:36:05 BANK_CITAD_ID="79303001", SOTIEN_CONLAI="150000000", UPDATED_DATE="2022-11-09 17:36:05.0", FILE_NAME="GTCG_dinhky_20221109.xlsx", STATUS="DATA_ERROR", ERROR_MSG="NOT_ALLOW_LIMIT", SOTIENTANG="0", SOTIENGIAM="0", LOAI_FILE="DK", STT="2", BANK_CODE="STB", ID="6829", NOTE="DC1:2286754104070,1,10/11/2022 00:00:00;DC2:10000000000,1501000000,10/11/2022 00:00:00,1000001;DEL: 1501000001,10/11/2022 00:00:00;DEL: 1501000001,10/11/2022 00:00:00;"
There are a total of 8 fields that I want to extract; the field names can be field1, field2, and so on, at least that much I can handle. Thank you in advance.
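Since the event is mostly KEY="value" pairs, one hedged approach is a per-field rex. A minimal sketch for a few of the fields (the remaining ones follow the same pattern); index=your_index is a placeholder for the real base search, and it assumes the raw event looks exactly like the sample above:

index=your_index
| rex field=_raw "STATUS=\"(?<STATUS>[^\"]*)\""
| rex field=_raw "ERROR_MSG=\"(?<ERROR_MSG>[^\"]*)\""
| rex field=_raw "FILE_NAME=\"(?<FILE_NAME>[^\"]*)\""
| rex field=_raw "NOTE=\"(?<NOTE>[^\"]*)\""
| table STATUS ERROR_MSG FILE_NAME NOTE

Each rex grabs whatever sits between the double quotes after the named key, so it keeps working even when a value (like NOTE) contains commas.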
Hello community, I have a query returning results with an IP address value (src_ip). I used to add a line to match a particular IP range:
| where cidrmatch("Range IP", src_ip)
Now I have many other IP ranges to add. Instead of adding many lines, I created a CSV lookup with all of these ranges:
range_ip | comment
-----------------------
10.0.0.0/8 | range1
11.0.0.0/8 | range2
12.0.0.0/8 | range3
Do you have any idea how I can filter my results using the cidrmatch function based on the range_ip column of my lookup CSV? Something like:
| where cidrmatch( range_ip IN lookup.csv, src_ip)
Thanks
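One common approach is to let the lookup itself do the CIDR matching instead of calling cidrmatch in SPL: create a lookup definition over the CSV and, in its advanced options, set the match type to CIDR(range_ip). A minimal sketch, assuming the lookup definition is named ip_ranges (a hypothetical name) and the base search is a placeholder:

index=your_index src_ip=*
| lookup ip_ranges range_ip AS src_ip OUTPUT comment
| where isnotnull(comment)

Rows whose src_ip falls inside any of the CIDR ranges get a comment value back from the lookup, and the where clause keeps only those rows.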
Hi, I have a network background and know a little about syslog and using the Splunk interface; this is however the first time I'm installing a new Splunk instance. I've started up my Splunk Cloud trial and spun up SC4S via podman (version 2.39.0) on RHEL 8.2. It seems to be running fine, with SC4S messages arriving in Splunk Cloud:
- - syslog-ng 149 - [meta sequenceId="1"]syslog-ng starting up; version='3.36.1'
host = splunk-sc4s, source = sc4s, sourcetype = sc4s:events
I'm sending syslog from my test ASA to SC4S, and with a tcpdump I can see it coming in:
07:46:13.658608 IP xxx.45.78.xxx.syslog > splunk-sc4s.internal.cloudapp.net.syslog: SYSLOG local7.debug, length: 101
07:46:13.735897 IP xxx.45.78.xxx.syslog > splunk-sc4s.internal.cloudapp.net.syslog: SYSLOG local7.info, length: 183
07:46:13.962147 IP xxx.45.78.xxx.syslog > splunk-sc4s.internal.cloudapp.net.syslog: SYSLOG local7.info, length: 155
07:46:16.550565 IP xxx.45.78.xxx.syslog > splunk-sc4s.internal.cloudapp.net.syslog: SYSLOG local7.info, length: 166
The problem is, in Splunk I'm only getting HTTP errors, which seem to refer to the above syslog messages forwarded by SC4S:
- syslog-ng 149 - [meta sequenceId="18"]curl: error sending HTTP request; url='https://prd-p-xxxxxx.splunkcloud.com:8088/services/collector/event', error='Timeout was reached', worker_index='2', driver='d_hec_fmt#0', location='root generator dest_hec:5:5'
host = splunk-sc4s, source = sc4s, sourcetype = sc4s:events
Things I've configured on SC4S, only the basics:
/etc/sysctl.conf
net.core.rmem_default = 17039360
net.core.rmem_max = 17039360
net.ipv4.ip_forward = 1
/opt/sc4s/env_file
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://prd-p-xxxxx.splunkcloud.com:8088
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-9c3c-4918-8eb3-xxxxxxxxxxxxx
#Uncomment the following line if using untrusted SSL certificates
SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no
In Splunk I've created the HEC token, and I've created the indexes like this. Looking at it, everything seems to fall into the "main" index instead of "lastchangeindex", which doesn't look right, does it? I must be missing something obvious here, because this should be straightforward, right? Appreciate any input on this, thanks!
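The 'Timeout was reached' error suggests the SC4S container cannot reach the HEC endpoint over the network at all (firewall, proxy, or egress rules), rather than a token or index problem. A quick way to isolate that, sketched here with a placeholder token and the same HEC URL from the env_file, is to test HEC directly from the SC4S host:

curl -k "https://prd-p-xxxxx.splunkcloud.com:8088/services/collector/event" \
  --header "Authorization: Splunk <your-HEC-token>" \
  --data '{"event": "connectivity test", "sourcetype": "sc4s:events"}'

A {"text":"Success","code":0} response means basic connectivity and the token are fine; a hang or timeout points at the network path between the SC4S host and Splunk Cloud rather than the SC4S configuration.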
Good morning, I'm looking for the specification of the S2S protocol. We are having trouble getting Splunk UF data through a Palo Alto firewall. The firewall has an application profiling engine, so it does not just look at layer 4 (IP, port, and protocol) but also at certain aspects of the packet, like headers. There is no predefined profile for Splunk S2S, so we need to create one, but I can't find the docs for the protocol definition. We are aware that it is possible to bypass this app engine and do layer-4 filtering only, but it would be the only application in the company to do that, and the fact that it would be for an Internet-facing service and a security service seems off. 🫣 We were advised to use S2S rather than HEC for stability reasons; if it does not work out we will have to switch, but for now we want to try to make it work.
So I created two new indexes on the cluster master and successfully deployed them to the indexing cluster; the bundle validated and show status was successful. But after restarting the server, the GUI will not start. I checked the status and splunkd wasn't running. Also, on the indexers' GUI I had a message saying:
Failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json manager=172.31.80.210:8089 rv=0 gotConnectionError=1 gotUnexpectedStatusCode=0 actual_response_code=502 expected_response_code=2xx status_line="Error connecting: Connection refused" socket_error="Connection refused" remote_error= [ event=addPeer status=retrying AddPeerRequest: { active_bundle_id=049FC0942259F88577FB49C2989F9432 add_type=Initial-Add base_generation_id=0 batch_serialno=1 batch_size=4 forwarderdata_rcv_port=9997 forwarderdata_use_ssl=0 guid=82EFC827-B30A-48D9-ADFA-A2EEC27DB3CA last_complete_generation_id=0 latest_bundle_id=049FC0942259F88577FB49C2989F9432 mgmt_port=8089 register_forwarder_address= register_replication_address= register_search_address= replication_port=8080 replication_use_ssl=0 replications= server_name=indexer_01 site=default splunk_version=8.2.4 splunkd_build_number=87e2dda940d1 status=Up } Batch 1/4 ].11/10/2022, 6:14:24 AM. I've been at it for over
In Search & Reporting I get this error and no data from my search: "Could not load lookup=LOOKUP-dropdowns". How do I fix this?
"Context":"{"user id":"jane.doe.sen", "Expense Date":"11/10/2022", How to use extract this rex command?      to come up result like  user id Expense Date jane.doe.sen 9/6/2022 ... See more...
"Context":"{"user id":"jane.doe.sen", "Expense Date":"11/10/2022", How to use extract this rex command?      to come up result like  user id Expense Date jane.doe.sen 9/6/2022 please help                                                                                              
Hi guys, Phantom 4.10.7: I tried to delete containers older than 6 months via delete_containers.pyc, and it confirmed the counts of affected containers, artifacts, and run records as expected. But after confirming the deletion and waiting a few seconds for the command to finish, I can still see the containers in the UI. If I rerun the delete_containers command with the same parameters, it says there is nothing to be deleted. Does anyone have any idea what is going on? I need to housekeep the environment due to a surge in disk usage, and IMO there is no better way than this one. Any suggestions are highly appreciated.
I know I can use the "rest" command as in the link below to get the list of saved searches. https://community.splunk.com/t5/Getting-Data-In/Is-there-any-way-to-list-all-the-saved-searches-in-Splunk/m-p/267333 Since the "rest" command cannot be used in Splunk Cloud, I would like an SPL search that can list them without using that command. It seems the "rest" command could be enabled if I contact Cloud Support, but I want to avoid using that command as much as possible! Best regards.
Hello community, I hope you are doing well. I have a customer that wants a version control system with Splunk, and the option I see is Git, to be able to handle the whole backup and restore process. I tried this app https://splunkbase.splunk.com/app/4355#/overview but it doesn't work as I expected, because it is not able to do the commit automatically and only saves the knowledge objects in another location. I would like to know if there is an app that handles the commit to the remote repository, or whether this has to be done with a script; I would like to avoid the customer doing it manually. Thanks for the inputs. Best regards.
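If no app covers the automatic commit, a scheduled script is a common fallback. A minimal sketch of a cron-driven commit, assuming $SPLUNK_HOME is /opt/splunk, that etc/apps has already been initialised as a Git working copy with a remote named origin, and that the branch is main (all of these are assumptions, not from the original post):

#!/bin/bash
# Hypothetical nightly backup of Splunk configuration to a remote Git repository.
cd /opt/splunk/etc/apps || exit 1
git add -A
# Commit only when something actually changed, then push to the remote.
git diff --cached --quiet || git commit -m "Automated Splunk config backup $(date +%F_%T)"
git push origin main

Run from cron (or a scheduled Splunk script input) this removes the manual step, though it only versions what lives on that one instance's filesystem.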
I'm trying to do a search to find IPs trying to log in using multiple usernames (using Duo). I have it working very close to how I want; the only issue is I need to filter out IP entries where only one username was attempted. Is there a way to do a simple where clause against the number of values returned from values()? I tried where values(username) > 1, but I guess that would have been too simple.
index=duo
| stats values(username) as user, count(username) as attempts by src_ip
| where attempts >1
| sort -attempts
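A sketch that filters on the number of distinct usernames per source IP using dc() rather than comparing values() directly, keeping the field names from the original search:

index=duo
| stats values(username) as users dc(username) as distinct_users count as attempts by src_ip
| where distinct_users > 1
| sort - attempts

attempts > 1 only says there was more than one event from the IP; dc(username) > 1 is what captures "more than one different username was tried from this IP".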
Hi, on our Splunk instance I have set up a report using a timechart with a span of 1h over a time frame of one day, and the report is scheduled to run every hour. However, each time the report runs it shows different results. Just wondered if anyone has seen this before? Thanks, Joe
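One thing worth checking is how the time range is defined: a relative range such as "Last 24 hours" moves forward with every hourly run, so each run legitimately covers a slightly different window, and late-arriving data in the most recent buckets also changes between runs. If the intent is to report on one fixed calendar day, snapping the range makes successive runs comparable; a hedged sketch in which the base search is a placeholder:

index=your_index earliest=-1d@d latest=@d
| timechart span=1h count

earliest=-1d@d latest=@d pins the search to yesterday's full day regardless of when the schedule fires.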
Hi, I developed a modular input that uses the Python Cryptodome library (https://pycryptodome.readthedocs.io). When executing it on macOS Ventura, it raises the error:
... _raw_ecb.abi3.so' not valid for use in process: mapped file has no cdhash, completely unsigned? Code has to be at least ad-hoc signed.
When executing the same code with a brew-installed python3.7, the code runs fine. Minimal example:
# create and activate a virtual environment:
python3.7 -m venv venv
source venv/bin/activate
# install the necessary lib
python3.7 -m pip install pycryptodomex
# exit the virtual env
deactivate
# move to where the packages have been stored
cd venv/lib/python3.7/site-packages
Test #1: execute "python3.7", and then type:
from Cryptodome.Cipher import AES
---> no error is raised
Test #2: start the python3 interpreter bundled in Splunk:
splunk cmd python3
>>> from Cryptodome.Cipher import AES
OSError: Cannot load native module 'Cryptodome.Cipher._raw_ecb': Not found '_raw_ecb.cpython-37m-darwin.so', Cannot load '_raw_ecb.abi3.so' ... .../_raw_ecb.abi3.so' not valid for use in process: mapped file has no cdhash, completely unsigned? Code has to be at least ad-hoc signed.), Not found '_raw_ecb.so'
I found this interesting article about similar problems on Inkscape: https://gitlab.com/inkscape/inkscape/-/issues/2299 and then I executed:
codesign -d --entitlements - /Applications/Splunk/bin/python3.7m
Executable=/Applications/Splunk/bin/python3.7m
[Dict]
  [Key] com.apple.security.cs.disable-executable-page-protection
  [Value] [Bool] true
There is no allowance for unsigned libraries, apparently. I tried this with Splunk v8.2.7 and v9.0.2 on an Intel-based Mac running macOS Ventura. Do you have any suggestions?
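The error message itself hints at the workaround people sometimes use here: ad-hoc signing the unsigned native libraries so that the hardened Splunk-bundled interpreter can load them. A hedged sketch, assuming the .so files live under the site-packages path shown above; this is not an officially documented fix, so test on a copy first:

# Ad-hoc sign every native library in the Cryptodome package:
# "-s -" means use the ad-hoc identity, "-f" replaces any existing signature.
find venv/lib/python3.7/site-packages/Cryptodome -name "*.so" -exec codesign -s - -f {} \;

Ad-hoc signing gives each library a cdhash, which addresses the "has no cdhash" complaint; depending on how the Splunk binary itself is signed, library validation may still reject libraries signed by a different identity, in which case this alone will not be enough.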
Looking to see if anyone has successfully migrated UBA nodes from Ubuntu to RHEL. We have several nodes whose Ubuntu version is EOL, and Splunk is no longer supporting UBA on Ubuntu post-18. Ideally, we would be able to carry the databases over in the migration. I did look at the documentation for migration/upgrades; unfortunately, the migration tasks specifically call out that the OSes must be the same.
I have the following query with multiple joins using max=0, and it is not giving me all results; I think as the data size grows, the joins can't take the load. How best can I optimize this query?

index=syslog (process=*epilog* "*slurm-epilog: START user*") OR (process=*prolog* "*slurm-prolog: END user*") host IN (preo*) timeformat="%Y-%m-%dT%H:%M:%S.%6N"
| rex field=_raw "(?<epilog_start>[^ ]+)\-\d{2}\:\d{2}.*slurm\-epilog\:\s*START\suser\=(?<user>[^\s]+)\sjob\=(?<job_id>[^ ]+)"
| rex field=_raw "(?<prolog_end>[^ ]+)\-\d{2}\:\d{2}.*slurm\-prolog\:\s*END\suser\=(?<user>[^\s]+)\sjob\=(?<job_id>[^ ]+)"
| stats values(epilog_start) as epilog_start values(prolog_end) as prolog_end first(_time) as _time by host user job_id
| search prolog_end!=""
| search user=*
| search job_id=*
| eval current_time = now()
| join host max=0 type=left
    [ search index=syslog "Linux version" host IN (preos*)
    | rex field=_raw "(?<reboot_time>[^ ]+)\-\d{2}\:\d{2}.*kernel:\s.*Linux\sversion"
    | stats count by reboot_time host ]
| convert timeformat="%Y-%m-%dT%H:%M:%S.%6N" mktime(prolog_end) as job_start
| convert timeformat="%Y-%m-%dT%H:%M:%S.%6N" mktime(epilog_start) as epilog_start
| convert timeformat="%Y-%m-%dT%H:%M:%S.%6N" mktime(reboot_time) as reboot_time
| eval diff_reboot = reboot_time - job_start
| eval reboot_time = if(diff_reboot<0, "", reboot_time)
| eval diff_reboot = if(diff_reboot<0, "", diff_reboot)
| eval job_end = if(epilog_start!="",epilog_start,if(diff_reboot>0,reboot_time,current_time))
| eval diff_reboot = if(diff_reboot!="",diff_reboot,10000000000)
| sort host user job_id prolog_end diff_reboot
| dedup host user job_id prolog_end
| join host max=0 type=left
    [ search index=syslog (("NVRM: Xid*") process=kernel) host IN (preos*)
    | rex field=_raw "(?<kernel_xid>[^ ]+)\-\d{2}\:\d{2}.*NVRM\:\sXid\s*\(PCI\:(?<PCIe_Bus_Id>[^ ]+)\)\:\s*(?<Error_Code>[^ ]+)\,\spid\=(?<pid>[^ ]+)\,\s*name\=(?<name>[^ ]+)\,\s(?<Log_Message>.*)"
    | stats count by host kernel_xid PCIe_Bus_Id pid name Error_Code Log_Message
    | search Error_Code="***" ]
| search kernel_xid!=""
| convert timeformat="%Y-%m-%dT%H:%M:%S.%6N" mktime(kernel_xid) as xid_time
| eval diff = xid_time - job_start
| eval diff2 = job_end - xid_time
| search diff>0 AND diff2>0
| eval job_start = strftime(job_start,"%Y-%m-%d %H:%M:%S.%3N")
| eval job_end = if(job_end==current_time,"N/A",strftime(job_end,"%Y-%m-%d %H:%M:%S.%3N"))
| eval xid_time = strftime(xid_time,"%Y-%m-%d %H:%M:%S.%3N")
| eval current_time = strftime(current_time,"%Y-%m-%d %H:%M:%S.%3N")
| eval reboot_time = strftime(reboot_time,"%Y-%m-%d %H:%M:%S.%3N")
| join user job_id type=left
    [ search index="slurm-jobs"
    | stats count by job_id job_name user nodelist time_start time_end state submit_line ]
| eval _time=strptime(xid_time,"%Y-%m-%d %H:%M:%S.%3N")
| stats count by host user job_id job_start job_end xid_time Error_Code PCIe_Bus_Id pid name Log_Message job_name nodelist state submit_line
| dedup host

Below are the sample logs.

Events for the first search:
2022-11-08T14:59:53.134550-08:00 preos slurm-epilog: START user=abc job=62112
2022-11-08T14:58:25.101203-08:00 preos slurm-prolog: END user=abc job=62112

Events for the subsearch of the first join:
2022-11-09T12:49:51.395174-08:00 preos kernel: [ 0.000000] Linux version hjhc (buildd@lcy02-amd64-032) (gcc (Ubuntu 11.2.0-19ubuntu1) 11.2.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #58-Ubuntu SMP Thu Oct 13 08:03:55 UTC 2022 (Ubuntu 5.15.0-52.58-generic 5.15.60)

Events for the second join:
2022-11-09T12:35:15.422001-08:00 preos kernel: [ 99.166912] NVRM: Xid (PCI:00:2:00): 95, pid='<unknown>', name=<unknown>, Uncontained: FBHUB. RST: Yes, D-RST: No

Events for the last join:
2022-11-09 20:50:02.000, mod_time="1668027082", job_id="62112", job_name="rfm_nvcp_bek_cd_job", user="abc", account="admin", partition="vi2", qos="all", resv="YES", timelimit_minutes="10", work_dir="/sbatch/logs/2022-11-09T11-30-39/preos/viking-hbm2/builtin/nvcomp_benchmark_cascaded", submit_line="sbatch rfm_nvcomp_benchmark_cascaded_job.sh", time_submit="2022-11-09 12:11:13", time_eligible="2022-11-09 12:11:13", time_start="2022-11-09 12:50:02", time_end="2022-11-09 12:51:22", state="COMPLETED", exit_code="0", nodes_alloc="1", nodelist="preos0093", submit_to_start_time="00:38:49", eligible_to_start_time="00:38:49", start_to_end_time="00:01:20"

Thanks in advance.
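The usual way around join's subsearch limits is to bring all the event types back in one base search and correlate with eventstats/stats instead. A minimal sketch of that pattern for just the first join (the reboot events), assuming the field extractions stay as in the original query; the later joins would follow the same idea:

index=syslog host IN (preos*) ("slurm-epilog: START user" OR "slurm-prolog: END user" OR "Linux version")
| rex field=_raw "(?<epilog_start>[^ ]+)\-\d{2}\:\d{2}.*slurm\-epilog\:\s*START\suser\=(?<user>[^\s]+)\sjob\=(?<job_id>[^ ]+)"
| rex field=_raw "(?<prolog_end>[^ ]+)\-\d{2}\:\d{2}.*slurm\-prolog\:\s*END\suser\=(?<user>[^\s]+)\sjob\=(?<job_id>[^ ]+)"
| rex field=_raw "(?<reboot_time>[^ ]+)\-\d{2}\:\d{2}.*kernel:\s.*Linux\sversion"
| eventstats values(reboot_time) as reboot_time by host
| where isnotnull(user)
| stats values(epilog_start) as epilog_start values(prolog_end) as prolog_end values(reboot_time) as reboot_time first(_time) as _time by host user job_id

eventstats copies each host's reboot timestamps onto every event for that host before the per-job stats, which is what the left join was doing, but without the subsearch row and time limits that make max=0 joins drop results.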
Constant memory growth with the Universal Forwarder, with ever-increasing channels. Once the third-party receiver is restarted, the UF re-sends a lot of duplicate data and frees up the channels.
Hi, I have a challenge and need to know how to solve it with Splunk, math, statistics, etc. Here is a log sample:
2022-10-21 13:19:23:120   10
2022-10-21 13:19:23:120   20
2022-10-21 13:19:23:120   9999
2022-10-21 13:19:23:120   10
2022-10-21 13:19:23:120   0
2022-10-21 13:19:23:120   40
FYI1: I don't want to summarize or average the values.
FYI2: each second contains over a thousand data points, and I need to find abnormal values in the last 24 hours.
FYI3: I need an optimized solution because there is a lot of data.
FYI4: I don't want to set a threshold, e.g. 100, to filter only values above it, because an abnormal situation might occur below that value.
FYI5: there is no constant value for detecting the abnormality, e.g. "check 10 values and flag as abnormal if increasing".
How can I find (1) incrementally increasing or (2) constant-rate points in the last 24 hours (as shown on the chart below)? It does not need to be shown on a chart, because there are lots of data points; just list them. Any idea? Thanks
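One threshold-free sketch is to score each point against a rolling baseline with streamstats and flag large deviations. This assumes the numeric column can be extracted into a field (called value here purely for illustration) and that index=your_index stands in for the real base search:

index=your_index earliest=-24h
| rex field=_raw "\s(?<value>\d+)\s*$"
| streamstats window=10000 avg(value) as baseline stdev(value) as spread
| eval zscore=if(spread>0, abs(value - baseline) / spread, 0)
| where zscore > 3
| table _time value baseline spread zscore

The z-score cut-off adapts to whatever the recent values look like, so a spike like 9999 stands out without a fixed threshold; a similar streamstats pass computing the point-to-point delta can be used to list long runs where the delta stays positive (incremental growth) or constant.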
I have a working search that uses a lookup, like this:
index=MyIndex [| inputlookup MyCSVFile | stats values(email) AS EmailAddress | format]
| chart count(Code) as NumCodes over EmailAddress
| sort -NumCodes
This works, but there are duplicate codes, so I want the search to count only unique codes per user. I am not sure how to say "count unique" in SPL. Thank you for your help!
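"Count unique" in SPL is the distinct-count function dc(). A sketch that keeps the original structure and only swaps the aggregation:

index=MyIndex [| inputlookup MyCSVFile | stats values(email) AS EmailAddress | format]
| chart dc(Code) as NumUniqueCodes over EmailAddress
| sort -NumUniqueCodes

dc(Code) counts each distinct Code value once per EmailAddress, so duplicate codes for the same user no longer inflate the total.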