All Posts

Try this:

(index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2)
| rename device.hostname as hostname
| rename device.username as username
| eval hosts = lower(coalesce(hostnames, hostname))
| stats values(*) as * by hosts
| table hosts, username, vendors, products, versions
1. Please, what's the best way to monitor the KV store? 2. What is the best way to monitor errors from a KV store migration?
Hello, how do I search based on a drop-down condition? Thank you in advance!

index = test
| eval week_or_day_token = "w"    (drop-down: "week" = "w", "day" = "d")
| eval day_in_week_token = 1      (drop-down: 0=Sunday, 1=Monday, 2=Tuesday, and so on)

If week_or_day_token is "week", then use day_in_week_token; otherwise, if week_or_day_token is "day", then use all days ("*"):

| eval day_in_week = if(week_or_day_token="w", day_in_week_token, "*")

Get the day number in the week for each timestamp:

| eval day_no_each_timestamp = strftime(_time, "%" + day_in_week_token)

I searched for the timestamps that fall on Monday (day_in_week=1), but I got 0 events:

| search day_no_each_timestamp = day_in_week

If I replace it with the literal "1", it works, even though the value of day_in_week is 1:

| search day_no_each_timestamp = "1"
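A minimal sketch of one possible fix, using the field names from the question: `search field = value` compares the field against the literal string on the right (here, the string "day_in_week"), not against another field's value. To compare two fields, `where` can be used instead; tostring() hedges against a numeric-vs-string mismatch between the eval result and strftime's output:

```
index = test
| eval day_in_week = if(week_or_day_token="w", day_in_week_token, "*")
| eval day_no_each_timestamp = strftime(_time, "%w")
| where day_in_week="*" OR day_no_each_timestamp=tostring(day_in_week)
```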
Is it possible to automate dashboard code management and deployment using GitLab?
Hello Splunkers!! A generic question I want to ask: there are 40+ dashboards in which the customer is not using any optimization. They are using direct index searches across all the panels, with no base searches or summary indexes in any of the dashboard panels. Sometimes a single dashboard has 60+ panels, all running index searches. Could anyone help me list all the consequences of this scenario? Thanks in advance.
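For context, a minimal Simple XML sketch of a base search (the index, fields, and panel here are illustrative, not from the question): the base search runs once, and each panel post-processes its results instead of issuing its own index search, which is the main optimization the question says is missing:

```xml
<dashboard>
  <label>Base search example</label>
  <!-- Runs once; panels below reuse its results -->
  <search id="base">
    <query>index=web sourcetype=access_combined | fields status, uri</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <!-- Post-process search: no index access of its own -->
        <search base="base">
          <query>stats count by status</query>
        </search>
      </chart>
    </panel>
  </row>
</dashboard>
```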
Currently, I have two tables.

Table1
hostnames    vendors    products    versions
host1        vendor1    product1    version1
host2        vendor2    product2    version2
host3        vendor3    product3    version3
host4        vendor4    product4    version4

Table2
device.hostname    device.username
HOST1              user1
HOST2              user2
HOST3              user3
HOST4              user4

The table that I want to generate from these two is the following:

Table3
hosts    username    vendors    products    versions
host1    user1       vendor1    product1    version1
host2    user2       vendor2    product3    version4
host3    user3       vendor3    product3    version3
host4    user4       vendor4    product4    version4

The search I tried was the following:

(index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2)
| rename device.hostname as hostname
| rename device.username as username
| eval hosts = coalesce(hostnames, hostname)
| table hosts, username, vendors, products, versions

The result was the following:

hosts    username    vendors    products    versions
host1                vendor1    product1    version1
host2                vendor2    product3    version4
host3                vendor3    product3    version3
host4                vendor4    product4    version4
HOST1    user1
HOST2    user2
HOST3    user3
HOST4    user4

host1 and HOST1 both reference the same hostname, just one index had the letters capitalized and the other did not. Does anyone have any ideas?
 
I'm seeing errors similar to the below whenever the Resources -> Subscription input is configured, and the TA doesn't pull data.

ValueError("Parameter 'subscription_id' must not be None.")
ValueError: Parameter 'subscription_id' must not be None.

You can only add the "subscription id" in the GUI while creating a new input, and when you save the config it immediately blanks out that field. I wouldn't assume you'd need a subscription id to pull data for all subscriptions in a tenant anyway. Either way it's busted. Has anyone gotten the "Subscriptions" input to work successfully? Does anyone know if this is a known bug? Splunk Add-on for Microsoft Cloud Services
If I run this:

index=main | rex field=_raw mode=sed "s/(\D)\d{3}-?\d{2}-?\d{4}(\D)/\1XXXXXXXXX\2/g"

I get all of the results back, but the SSNs are still in clear text (not redacted).
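One possible adjustment to try, sketched against the query from the question: the (\D) groups require a non-digit character on both sides of the SSN, so a number at the very start or end of _raw never matches. Lookarounds assert the same boundaries without consuming characters (worth verifying that your Splunk version's sed mode accepts them, and that the SSNs really are in _raw for this sourcetype):

```
index=main
| rex field=_raw mode=sed "s/(?<!\d)\d{3}-?\d{2}-?\d{4}(?!\d)/XXXXXXXXX/g"
```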
Thanks @Ryan.Paredez for the update. I have one more query regarding this: we have DB agents, and the SQL servers are still using TLS 1.1 and 1.0. Can this affect the DB metrics reporting to AppD? Regards, Fadil
I was with the NAVSEA team at SKO, and they said that if I provided the Canadian compliance requirements you could add these to the Compliance Essentials app, like you have done for the US and Australia. Here is what I received from my customer Marine Atlantic:

For compliance: we deal with IMO, SOLAS Chapter 9, sometimes referred to as ISM code. This is for existing vessels. The new vessel is DNV SP1 (Security Profile 1 or Cyber Secure Essential) compliant (some SP3 systems). You may note, SP0 is IMO. We also use ITSG-33 internally, which is the Canadian version of NIST 800-53 (but not maintained as well - NIST has added some cloud-based controls, for example).

Does the Splunk Compliance Essentials app cover these requirements? Please let me know. Thanks, Alli. RSM PBST Canada.
Hi, was your SF=RF before you started the migration? Were there any issues (e.g. some node crashed/stopped) during the migration? What could you find in the _internal log? There should be some mention of the reason in the logs. r. Ismo
I have written this query:

index=index_name (log.event=res OR (log.event=tracing AND log.operationName=query_name))
| timechart span=1m avg(log.responseTime) as AvgTimeTaken, min(log.responseTime) as MinTimeTaken, max(log.responseTime) as MaxTimeTaken count by log.operationName

My results look like this:

_time                  AvgTimeTaken:NULL  MaxTimeTaken:NULL  MinTimeTaken:NULL  count:NULL  count:query_name
2024-03-18 13:00:00                                                             0           0

I want to understand what the ":NULL" means, and also how I can get the query to display all values. Secondly, the count is displayed for a query_name that is similar to the query_name in my query string; I wanted an exact match on query_name. Can someone please help me with this? Thanks!
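A sketch of one way to address both points, reusing the field names from the question: the ":NULL" columns are the series timechart builds for events that have no log.operationName value, and usenull=f suppresses that series; filtering with `where` on the quoted field name forces an exact string comparison rather than the token matching the base search does (this assumes you only want events whose log.operationName is exactly "query_name"):

```
index=index_name (log.event=res OR log.event=tracing)
| where 'log.operationName'="query_name"
| timechart span=1m usenull=f avg('log.responseTime') as AvgTimeTaken, min('log.responseTime') as MinTimeTaken, max('log.responseTime') as MaxTimeTaken, count by log.operationName
```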
Yes, it auto-selects "CSV" during import, but I have also manually selected CSV to see if there was a bug there.
Hi, if you have this information in your logs and they have been ingested into Splunk, then you can query it. What logs do you have? Which platforms do they cover? What have you already tried? r. Ismo
I expanded my search time to 60 minutes, as 24 hours or 30 days produced over a million events; the 60-minute search still produced hundreds of thousands. Can you review this rule to see if there is anything within the SPL code that is incorrect? It should only produce fewer than a hundred . . . if even that.
I run a Splunk query to see events from my web application firewall. I filter out certain violations by name, using a NOT and parentheses to list the violations I don't care to see.

My network is subject to attack, and my query, which I use to look for legitimate users being blocked, gets inundated by various IPs generating hundreds of events. How can I table fields so I can see the data I want per event, but also filter out a field if that field's event count is greater than a value? A simple example: an IP from a facility is seen once for a block in the last 15 minutes, while another IP was seen 400 times as part of a scan. I want to see the 1 (or even 10) events from a specific source IP, but not the 400 from another. I know I can block all of the IP, or part of it by a wildcard, but that gets messy and can lead to too many IPs in a NOT statement.

Current table info in my query:

| table _time, event_id, hostname, violation, policy, uri, ip_client | sort - _time

Adding a stats count by ip_client only shows the count and IP, losing the other data, and the event IDs will always be different, so the count will never be higher than 1. It would be nice if I could do something like "| where count ip_client<=10" to remove any source IPs that show up more than 10 times in the results.
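A minimal sketch of one approach, using the field names from the question: unlike stats, eventstats attaches the per-IP count to every event without collapsing the rows, so the count can be filtered on while the full table is kept:

```
... your base search ...
| eventstats count as ip_count by ip_client
| where ip_count <= 10
| table _time, event_id, hostname, violation, policy, uri, ip_client
| sort - _time
```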
Hi, I'm not sure, but my expectation is that since CentOS 8 isn't supported (https://docs.splunk.com/Documentation/SOARonprem/6.2.0/Install/InstallUnprivileged), it may not parse the content of this file correctly. r. Ismo
This is a valid method to do it. Did you select the correct sourcetype (csv) when you uploaded it?
Hi @srseceng, OK, "Add data" on the same Indexer, I suppose. In this case the issue is in the regex: what happens when you run the sed regex in the Splunk UI? Are you sure about the sourcetype? Did you restart Splunk after the props.conf update? Sorry for the stupid questions, but "Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth" (Sir Arthur Conan Doyle)! Ciao. Giuseppe
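For reference, masking via props.conf is usually done with a SEDCMD at index time rather than a search-time rex; a minimal sketch, where the stanza name "my_sourcetype" is a placeholder for the actual sourcetype in question:

```
# props.conf on the parsing tier (indexer or heavy forwarder); requires a restart
[my_sourcetype]
SEDCMD-mask_ssn = s/\d{3}-?\d{2}-?\d{4}/XXXXXXXXX/g
```

Note that a SEDCMD only affects data indexed after the change; events already on disk stay as they were ingested.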