All Topics

I'm new to cybersecurity, six months into my first entry-level job, and new to Splunk. I took some classes, but they were quick and didn't go into much detail; the instructor basically read the slides. I've run into an issue: a red warning on 8.2.4 reading "The percentage of small buckets (100%) created over the last hour is high and exceeded the red thresholds (50%) for index=mail, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=5, small buckets=1". It would then list the last 50 related messages; early this morning it did, but now it says 'None'. It happens on indexer #7 of the 8 we have. I've been crawling the web to gain some understanding and found this link: Solved: The percentage of small of buckets is very high an... - Splunk Community. The OP had the same issue but talked about time parsing and was able to fix it. What is time parsing, and how do I fix it? I am not great with search strings, regex, etc.; Splunk just kind of fell in my lap. I tried to follow the search string that @jacobpevans wrote up in reply to that post, though I'm not sure I follow it well: basically it searches index=_internal sourcetype=splunkd for each hot bucket, lists the hot buckets that are moving to warm, renames a field so the indexes can be joined, and then the join command ties each instance to a rollover. I ran his search as-is and got a lot of output listing many indexes, but not index=mail as indicated in the warning. The output also shows 4 rows (2 indexes on 2 different indexers) with Violation and 100% small buckets. I would like to resolve this issue but I am seriously lost, haha. I think Splunk may be the death of my career before it even gets started.
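For orientation, a minimal sketch of that kind of search, built only on the standard HotBucketRoller lines in _internal; the 10 MB cutoff for "small" is just an illustrative guess, not the threshold the health check itself uses:

index=_internal sourcetype=splunkd component=HotBucketRoller "finished moving hot to warm"
| eval size_mb = round(size/1024/1024, 2)
| stats count AS rolled_buckets count(eval(size_mb < 10)) AS small_buckets avg(size_mb) AS avg_size_mb BY idx splunk_server
| eval pct_small = round(100 * small_buckets / rolled_buckets, 1)
| sort - pct_small

If index=mail never shows up for indexer #7, the rollovers may simply have happened outside the search time range, so it is worth widening the range to a few days.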
Hi, some users are intermittently seeing the error "Invalid_adhoc_search_level" while running searches. Has anyone else faced a similar issue? If so, why does it happen and how can it be fixed? Splunk version: 8.2.6
Hello, I am trying, so far unsuccessfully, to add a multiline configuration to the Helm values file for an SCK install. I am unable to get any custom filter to stick at fresh install or upgrade; each time, the lines that control multiline handling are wiped out. Does anyone have an example of injecting this into values.yaml for a Helm SCK install or upgrade? I am trying to get multiline log handling, per the project below, to apply at install or upgrade time. GitHub - splunk/splunk-connect-for-kubernetes: Helm charts associated with kubernetes plug-ins. Thanks
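A rough sketch of the shape this usually takes in the umbrella chart's values.yaml, assuming the splunk-kubernetes-logging subchart and its logs.<name>.multiline.firstline option; the log name, pod selector, and regex below are placeholders, so check the keys against the values.yaml shipped with your chart version:

splunk-kubernetes-logging:
  logs:
    my-app:                              # placeholder log name
      from:
        pod: my-app                      # placeholder pod selector
      multiline:
        firstline: /^\d{4}-\d{2}-\d{2}/  # lines not matching this regex are appended to the previous event

Values passed with helm install -f values.yaml (or helm upgrade -f values.yaml) should survive upgrades, whereas edits made directly to the rendered ConfigMap are overwritten on the next upgrade.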
We have a home-grown application that pings Google DNS on a regular basis. We are ingesting the data from our Meraki wireless devices, and I would like to filter out the ICMP messages with a destination of 8.8.8.8. Our events look like this:

7/8/22 8:14:51.427 AM  2022-07-08 07:14:51.427 xxx.xxx.xxx.xxx 1 Location_XXX flows src=xxx.xxx.0.1 dst=8.8.8.8 mac=70:D3:79:XX:XX:XX protocol=icmp type=8 pattern: allow icmp
host = xxx.xx.0.2
source = /syslog0/syslog/meraki/xxx.xx.0.2/messages.log
sourcetype = meraki

What would be the most efficient way to filter these messages to help reduce license usage?
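One common approach is to drop the events at parse time with a nullQueue transform. A sketch, assuming the events pass through a heavy forwarder or indexer (index-time filtering does not happen on a universal forwarder), that the sourcetype is meraki as shown, and with a regex adapted from the sample event above:

# props.conf
[meraki]
TRANSFORMS-drop_google_icmp = drop_google_icmp

# transforms.conf
[drop_google_icmp]
# match events whose destination is 8.8.8.8 and whose protocol is icmp
REGEX = dst=8\.8\.8\.8\s.*protocol=icmp
DEST_KEY = queue
FORMAT = nullQueue

Events dropped this way never reach the index, so they do not count against the license.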
Hello, I would like to be able to create a serverclass based on our inventory, which is indexed in Splunk. The problem with using wildcards is that our servers don't have sufficiently detailed names to determine the type of database running on them. Example:

vm-db-1 -> MariaDB
vm-db-2 -> PostgreSQL
vm-db-3 -> MariaDB
vm-db-4 -> Oracle

With a Splunk query, I can easily find this information. Is there a solution to my problem? Thanks for your help
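A sketch of the search-side half, assuming a hypothetical inventory index, sourcetype, and db_type field; the idea is to export the matching hosts to a file that the deployment server's serverclass can then consume as a whitelist (for example via whitelist.from_pathname, if your version supports it):

index=inventory sourcetype=cmdb db_type="MariaDB"
| stats latest(_time) AS last_seen BY host
| fields host
| outputlookup mariadb_hosts.csv

A scheduled search like this keeps the CSV current; the remaining piece is copying or referencing that list from serverclass.conf on the deployment server.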
There are logs with contents like

[{timestamp: xxx, duraton: 5,  url: "/foo1", status: 200}, {timestamp: xxx, duraton: 7,  url: "/foo2", status: 200}, {duraton: 6,  url: "/foo1", status: 200}...]

I'd like to report throughput and latency with sparklines. I can get the avg sparkline now, but the average latency sparkline may not be helpful enough on its own, so I'm looking for a way to get a p50 sparkline, p90 sparkline, or similar. A sample query is below.

...  earliest=-1d@d latest=@d
| stats sparkline(count, 5m) as throughput,
        sparkline(avg(duration), 5m) as latency,
        count as total_requests,
        p50(duration) as duration_p50,
        p90(duration) as duration_p90,
        p99(duration) as duration_p99
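As far as I know, sparkline() only supports a limited set of aggregation functions and percentiles are not among them, so a percentile "sparkline" usually has to be built by bucketing time yourself. A minimal sketch with 5-minute spans:

...  earliest=-1d@d latest=@d
| bin _time span=5m
| stats count AS throughput p50(duration) AS latency_p50 p90(duration) AS latency_p90 p99(duration) AS latency_p99 BY _time

This produces a time series (one row per 5-minute span) rather than an inline sparkline cell, which a line chart can then visualize per URL or overall.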
Base query:

index=jenkins* teamcenter
| search event_tag=job_event
| search build_url=*TC_Active*
| where isnotnull(job_duration)
| rex field=job_name "(?<app>[^\.]*)\/(?<repo>[^\.]*)\/(?<jobname>[^\.].*)"
| rex field=metadata.GIT_BRANCH_NAME "(?<branch>.*)"
| rex field=user "(?<user>[^\.]*)"
| search app="*" AND repo="*" AND jobname="*" AND branch="*" AND user="*"
| eval string_dur = tostring(round(job_duration), "duration")
| eval formatted_dur = replace(string_dur,"(?:(\d+)\+)?0?(\d+):0?(\d+):0?(\d+)","\1d \2h \3m \4s")
| rename job_started_at AS DateTime app AS Repository branch AS Branch jobname AS JobName job_result AS Job_Result formatted_dur AS Job_Duration "stages{}.name" AS "Stage View" "stages{}.duration" AS Duration
| table DateTime Repository Branch JobName Job_Result Job_Duration "Stage View" Duration

Output ("Stage View" and Duration are multivalue):

DateTime | Repository | Branch | JobName | Job_Result | Job_Duration | Stage View | Duration
2022-07-07T11:47:39Z | TeamCenter/TC_Active/TCUA_Builds | release/ALM_TC15.5 | AMAT_Key_Part_Family_Extraction | SUCCESS | d 0h 15m 35s | Preparation, Sonar Analysis, Build, Save Artifacts | 108.817, 419.698, 15.819, 376.698
2022-07-07T17:14:49Z | TeamCenter/TC_Active/Portal | release/ALM_TC15.5 | com.amat.rac | SUCCESS | d 0h 25m 49s | Preparation, Sonar Analysis, Build, Save Artifacts | 105.014, 1309.388, 29.486, 101.647

I need to add another column, "stage_duration", to the output, which converts the "Duration" field values into "Day Hr Min Sec" format.
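A sketch of one way to add that column, assuming Duration is a multivalue field of seconds (mvmap needs Splunk 8.0 or later); the replace() pattern is the same one already used for formatted_dur:

| eval stage_duration = mvmap(Duration, replace(tostring(round(Duration), "duration"), "(?:(\d+)\+)?0?(\d+):0?(\d+):0?(\d+)", "\1d \2h \3m \4s"))
| table DateTime Repository Branch JobName Job_Result Job_Duration "Stage View" stage_duration

Each value in Duration is converted independently, so the new column stays aligned with the "Stage View" names.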
host="SPL-SH-DC" sourcetype="ABCSW"......| search "Plugin Name" != "TLS Version 1.1 Protocol Deprecated" AND Port != "8443" AND Port != "8444" | table "IP Address",Host_Name,"Plugin Name",Severity,P... See more...
host="SPL-SH-DC" sourcetype="ABCSW"......| search "Plugin Name" != "TLS Version 1.1 Protocol Deprecated" AND Port != "8443" AND Port != "8444" | table "IP Address",Host_Name,"Plugin Name",Severity,Protocol,Port,Exploit,System_Type,Synopsis,Description,Solution,"See Also","CVSS V2 Base Score",CVE,Plugin,status,Pending_since,source Hi Splunker, Could you please help.. I have a query as I have put above . However,  I want a result query with filter Field " Plugin Name " not equal "TLS Version 1.1 Protocol Deprecated" but base on Field "Port" equal  "8443" and " 8444". I will be appreciate for your help. 
Hello Splunkers, Splunk crashes on our Linux core servers (master, search heads and heavy forwarders; indexers are not affected) with "ENGINE: Bus STOPPED" errors. When I check splunkd status on the host, it turns out that:

se1234@z1il1234:~> /opt/splunk/bin/splunk status
splunkd 130820 was not running.
Stopping splunk helpers...
Done.
Stopped helpers.
Removing stale pid file... done.

It is an intermittent issue: this morning we had it on SH1, yesterday on SH2, and a week before on SH4. We have checked resource usage and nothing shown in the DMC would explain splunkd crashing (e.g. high CPU usage). Any idea what is worth checking or how to troubleshoot? Greetings, Dzasta
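One place to start, sketched with the host and time window as placeholders, is whatever splunkd logged just before it died; it is also worth looking for crash-*.log files under $SPLUNK_HOME/var/log/splunk and checking the OS logs (dmesg/journalctl) for the OOM killer:

index=_internal host=z1il1234 source=*splunkd.log* (log_level=ERROR OR log_level=FATAL OR log_level=WARN) earliest=-24h
| stats count latest(_time) AS last_seen BY component log_level
| convert ctime(last_seen)
| sort - count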
Hi everyone,

basically I am trying to count how many unique customers I had in a period, and that worked well with dc(clientid), until I wanted to know "how many unique customers do I have by product group":

|stats dc(clientid) as Customers by year quarter product_toplevel product_lowerlevel

For example, when looking at only one specific client who purchased 3 products within one group, this can return entries like:

year  quarter  product_toplevel  product_lowerlevel  Customers
2022  1        Group A           Article 1           1
2022  1        Group A           Article 2           1
2022  1        Group A           Article 3           1

Now Splunk is not wrong here: looking at each product there is 1 unique customer. However, if I were to sum up the Customers column it would look like I have 3 customers in total. I would much rather have it return the value 1/3 (about 0.33) for Customers in the above example, so that I can export the table to Excel and work with pivot tables while retaining the 'correct' total of unique customers. For that purpose I intend to use eventstats to create a sort of column total and then divide the value of Customers by that column total, but I cannot figure out how to do it. Any ideas?
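A sketch of one way to get fractional customers that sum back to the true distinct count: work out, per client and quarter, how many product rows that client appears in, give each row a weight of 1 divided by that count, and then sum the weights.

| stats count BY year quarter product_toplevel product_lowerlevel clientid
| eventstats dc(product_lowerlevel) AS products_per_client BY year quarter clientid
| eval customer_share = 1 / products_per_client
| stats sum(customer_share) AS Customers BY year quarter product_toplevel product_lowerlevel

With the single client in the example above, each of the three Article rows gets 1/3, and the quarter's total comes back to exactly 1 customer.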
Hello, I would like to request an improvement in the official unix TA: adding a fields.conf with the following content:

[action]
INDEXED = false
INDEXED_VALUE = false

Indeed, for some events the value filled into the action field doesn't exist in the indexed event, so the search can't find the events. See the following topic.

Example:
- command launched: useradd splunky
- event received in Splunk (syslog): Jul  8 07:21:09 host useradd[4450]: new user: name=splunky, UID=1001, GID=1001, home=/home/splunky, shell=/bin/bash
- applied props.conf: REPORT-account_management_for_syslog = useradd, [...]
- applied transforms.conf:

## Account Management
[useradd]
REGEX = (useradd).*?(?:new (?:user|account))(?:: | (?:added) - )(?:name|account)=([^\,]+),(?:\s)(?:(?:UID|uid)=(\w+),)?(?:\s)(?:(?:GID|gid)=(\w+),)?(?:\s)*(?:home=((?:\/[^\/ ]*)+\/?),)?(?:.*uid=(\d+))?
FORMAT = vendor_action::"added" action::"created" command::$1 object_category::"user" user::$2 change_type::"AAA" object_id::$3 object_path::$5 status::"success" object_attrs::$4 src_user_id::$6

- Splunk query: index=[...] action="created"  --> No result.

With the proposed fields.conf, the event is found and displayed in the results.
Hi, I want to disable multiple alerts/reports using curl (TA-webtools). Basically my results look like the table below:

title    app   id
report1  app1  https://abc.com:8089/servicesNS/nobody/app1/saved/searches/report1
report2  app2  https://abc.com:8089/servicesNS/nobody/app2/saved/searches/report2
report3  app3  https://abc.com:8089/servicesNS/nobody/app3/saved/searches/report3

How can I disable every alert/report listed in the id column in a single query? Any help is appreciated! @jkat54
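For reference, a sketch that sidesteps the TA-webtools syntax and simply builds one plain curl command per result row; POSTing disabled=1 to a saved search's REST endpoint disables it, and admin:changeme below is a placeholder credential:

| eval disable_cmd = "curl -k -u admin:changeme -X POST \"" . id . "\" -d disabled=1"
| table title app disable_cmd

The generated commands can then be run from a shell, or the same per-row POST can be issued by whichever REST-capable search command you have installed.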
Could not load JSON from CEF parameter: Error Code: Error code unavailable. Error Message: Expecting ',' delimiter: line 5 column 1 (char 97). This is a failed action for a Phantom asset, and in the Phantom asset config we don't have a delimiter block. Could anyone please help me with this? I'm new to Phantom.
Hi, I have a setup where a UF is sending data to a HF. From the HF, the data is supposed to be sent to two different indexers under different indexes; for example, indexer01 receives the data in indexA while indexer02 receives the same data in indexB. This is what I have tried so far, but it is not working: the data flow itself is correct and the same data reaches both indexers, but both use the indexA predefined on the UF.

inputs.conf (UF)
[monitor:///home/name/samplelogs]
disabled = false
index = indexA
sourcetype = sourcetypeA

inputs.conf (HF)
[splunktcp://9997]

outputs.conf (HF)
[tcpout]
defaultGroup = indexer01, indexer02

[tcpout:indexer01]
server=indexer01_IP

[tcpout:indexer02]
server=indexer02_IP

inputs.conf (indexer02)
[splunktcp://9997]
index=indexA
queue=parsingQueue

props.conf (indexer02)
[sourcetypeA]  (or)  [host::UF_hostname]  (or)  [source::/home/name/samplelogs]
TRANSFORMS-index = overrideindex

transforms.conf (indexer02)
[overrideindex]
DEST_KEY = _MetaData:Index
REGEX = .
FORMAT = indexB

Any help would be appreciated! Thanks!
Is MongoDB compacting of indexes to save space after data is deleted a built-in option in Splunk 9? Previous posts indicated it was not possible, and the reason appeared to be that WiredTiger was not the standard storage engine under the hood. Now that WiredTiger is required in Splunk 9, is 'compact' supported as a standard feature? We need an elegant, supported way to free up disk space after deleting data. When you realize that something has flooded your indexes and filled up disk space, what is the supported option for getting the individual events out of the index and reclaiming the disk space without losing the rest of the data in the index? The emphasis is on reclaiming/freeing the disk space. Is the only option to let the buckets age out to frozen? Thanks.
I have a query that must search 9 weeks of data, and then applies a filter against a single field (dv_opened_at) looking for specific events that occurred within an 8 week period. Initial 9 week search is necessary to catch events that were modified after the end of the last week, yet had a dv_modified_at time within the last 8 weeks.

Query:

index=cmdb (dv_number=* OR number=*) dv_state=* dv_assigned_to[| inputlookup cmdb_users.csv| table dv_assigned_to ] earliest=-8w@w latest=now()
| table _time number dv_number dv_opened_at dv_assigned_to dv_short_description dv_watch_list dv_sys_updated_on dv_state close_notes
| dedup number
| eval dv_opened_at=strptime(dv_opened_at,"%Y-%m-%d %H:%M:%S")
| where dv_opened_at>=relative_time(now(), "-8w@w") AND dv_opened_at<=relative_time(now(), "@w")
| eval _time=dv_opened_at
| bin _time span=1w
| eval weeknumber=strftime(_time,"%U")
| rename dv_assigned_to AS Analyst
| timechart limit=0 useother=false span=1w count BY Analyst

The problem is, the timechart outputs 9 weeks of data, and as expected the last week is all 0's. How do I eliminate the current week from the output, but keep the current week in the initial query?

Output:

_time       Analyst1  Analyst2  Analyst3  Analyst4
2022-05-08  19        6         0         0
2022-05-15  5         4         0         0
2022-05-22  8         2         0         1
2022-05-29  7         4         0         0
2022-06-05  1         3         1         39
2022-06-12  7         1         4         51
2022-06-19  3         2         0         59
2022-06-26  25        5         2         26
2022-07-03  0         0         0         0     #how to drop this row each weekly report
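One sketch: keep the 9-week search exactly as it is and drop the in-progress week after the timechart, since that bucket starts at the current @w boundary:

| timechart limit=0 useother=false span=1w count BY Analyst
| where _time < relative_time(now(), "@w")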
Our login page is developed by team1 and the main home page (After login) is developed by team2. The event logs from each use completely different structures. I strongly suspect unique system identifiers in the login logs may be carried into the home page logs, but I don't know which fields (out of 20-50 fields in each log) may contain similar values.  Is there a method to find fields that have the same value in both sources if I don't know which fields to match on?  (index=A sourcetype="login" colA="apple", colB="ABC123" , colC="purple") (index=B sourcetype="home" field1="yellow", field2="orange", ..., field20="ABC123", field21="Monkey") How can I search both sources to identify ( login.colB == home.field20) if I don't know in advance those fields match? I may not find ANY common values...
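A brute-force sketch of one way to hunt for shared values without knowing the field names in advance, assuming search-time extracted fields (multivalue fields may need extra handling): flatten every field of every event into name=value pairs, then keep only the values seen in both sourcetypes.

(index=A sourcetype="login") OR (index=B sourcetype="home")
| eval st=sourcetype
| foreach * [ eval pairs=mvappend(pairs, "<<FIELD>>=" . '<<FIELD>>') ]
| fields st pairs
| mvexpand pairs
| rex field=pairs "^(?<fname>[^=]+)=(?<fvalue>.*)"
| where fname!="st" AND fname!="pairs"
| stats dc(st) AS sourcetypes values(eval(if(st="login", fname, null()))) AS login_fields values(eval(if(st="home", fname, null()))) AS home_fields BY fvalue
| where sourcetypes=2

Each surviving row is a value (such as ABC123) together with the field names it appeared under in each sourcetype, which is usually enough to spot candidate join keys like colB vs field20.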
We have a Syslog server collecting data from Meraki wireless devices. There is a UF installed on the Syslog server sending data to Splunk. I have been trying to use blacklist entries to filter out the ICMP protocol events, which we don't need, but I have been unable to drop them. The entries in my inputs.conf file for this are:

[monitor:///syslog0/syslog/meraki/*/*.log]
disabled=0
host_segment = 4
blacklist1 = protocol=icmp
blacklist2 = "(?192.\168.\30.\143.)"
blacklist3 = 10.\12.\239.\7
index = network
sourcetype = meraki

I have tried a number of variations and have been unable to get the "protocol=icmp" events to drop. Is there something obvious that I am missing? Thanks in advance for any suggestions.
| eval RouteLatency = if (Name="ABC" AND HTTP="*https://.net.*.com*" , bckLatency ,RouteLatency )
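In case the goal of the line above is a wildcard comparison: inside eval, = does a literal string match, so a pattern like "*https://.net.*.com*" never matches. A sketch using like(), which is the closest translation (% plays the role of *):

| eval RouteLatency = if(Name="ABC" AND like(HTTP, "%https://.net.%.com%"), bckLatency, RouteLatency)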
Good afternoon. We currently have our Splunk Cloud environment receiving logs from our firewall, Office 365, and Azure. Now we want to send Windows logs as well, but after installing the universal forwarder on the Windows machine we cannot see the logs in Splunk Cloud. We can see that the forwarder is collecting the logs, but they never appear in Splunk Cloud. According to the attached images, the host is correct, but the expected index is not receiving the logs. Could someone please help?
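Two quick checks, sketched with a placeholder hostname; the first confirms that the forwarder's own internal logs are reaching Splunk Cloud at all, and the second looks for the data regardless of which index it landed in (assuming your role can search those indexes):

index=_internal host=<windows_uf_hostname> earliest=-1h | stats count BY sourcetype

index=* host=<windows_uf_hostname> earliest=-1h | stats count BY index sourcetype

If the first search returns nothing, the forwarder may not actually be connected to the cloud stack (credentials package or outbound 9997 issue); if only the second is empty, the inputs or the target index are the more likely culprits.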