All Posts

Is it possible to automate dashboard code management and deployment using GitLab?
Hello Splunkers!! A generic question I want to ask. The customer has 40+ dashboards and is not using any optimization in any of them. They run direct index searches across all the panels; they are not using base searches or summary indexes in any of the dashboard panels. Sometimes a single dashboard has 60+ panels, all running index searches. Could anyone help me list all the consequences of this scenario? Thanks in advance
Currently, I have two tables.

Table1
hostnames    vendors    products    versions
host1        vendor1    product1    version1
host2        vendor2    product2    version2
host3        vendor3    product3    version3
host4        vendor4    product4    version4

Table2
device.hostname    device.username
HOST1              user1
HOST2              user2
HOST3              user3
HOST4              user4

The table that I want to generate from these two is the following:

Table3
hosts    username    vendors    products    versions
host1    user1       vendor1    product1    version1
host2    user2       vendor2    product3    version4
host3    user3       vendor3    product3    version3
host4    user4       vendor4    product4    version4

The search I tried was the following:

(index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2)
| rename device.hostname as hostname
| rename device.username as username
| eval hosts = coalesce(hostnames, hostname)
| table hosts, username, vendors, products, versions

The result was the following:

hosts    username    vendors    products    versions
host1                vendor1    product1    version1
host2                vendor2    product3    version4
host3                vendor3    product3    version3
host4                vendor4    product4    version4
HOST1    user1
HOST2    user2
HOST3    user3
HOST4    user4

host1 and HOST1 both reference the same hostname; one index just has the letters capitalized and the other does not. Does anyone have any ideas?
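A minimal sketch of one way to handle the case mismatch, reusing the field and index names from the post: lowercase the hostname before coalescing, then collapse the rows from the two indexes with stats so each host ends up on one row.

(index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2)
| rename device.hostname as hostname, device.username as username
| eval hosts = lower(coalesce(hostnames, hostname))
| stats values(username) as username, values(vendors) as vendors, values(products) as products, values(versions) as versions by hosts

Because the rows from both indexes now share the same lowercased hosts value, stats merges them into the single Table3-style row per host.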
 
I'm seeing errors similar to the ones below whenever the Resources -> Subscription input is configured, and the TA doesn't pull data.

ValueError("Parameter 'subscription_id' must not be None.")
ValueError: Parameter 'subscription_id' must not be None.

You can only add the "subscription id" in the GUI while creating a new input, and when you save the config it immediately blanks out that field... I wouldn't assume you'd need a subscription id to pull data for all subscriptions in a tenant anyway. Either way it's busted. Has anyone gotten the "Subscriptions" input to work successfully? Does anyone know if this is a known bug? Splunk Add-on for Microsoft Cloud Services
If I run this:

index=main | rex field=_raw mode=sed "s/(\D)\d{3}-?\d{2}-?\d{4}(\D)/\1XXXXXXXXX\2/g"

I get all of the results back, but the SSNs are still in clear text (not redacted).
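One possible cause, offered as an assumption to verify against your data: the pattern requires a non-digit character on both sides of the number, so an SSN at the very start or end of an event never matches. A sketch that also matches at event boundaries:

index=main | rex field=_raw mode=sed "s/(^|\D)\d{3}-?\d{2}-?\d{4}(\D|$)/\1XXXXXXXXX\2/g"

Note that rex mode=sed only masks the results of this search; it does not change events already written to the index.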
Thanks @Ryan.Paredez for the update. I have one more query regarding this: we have DB agents, and the SQL servers are still using TLS 1.1 and 1.0. Can this affect the DB metrics reporting to AppD? Regards, Fadil
I was with the NAVSEA team at SKO and they said that if I provided the Canadian compliance requirements you could add these to the Compliance Essentials app, like you have done for the US and Australia. Here is what I received from my customer Marine Atlantic:

For compliance: we deal with IMO, SOLAS Chapter 9, sometimes referred to as ISM code. This is for existing vessels. The new vessel is DNV SP1 (Security Profile 1 or Cyber Secure Essential) compliant (some SP3 systems). You may note, SP0 is IMO... We also use ITSG-33 internally, which is the Canadian version of NIST 800-53 (but not maintained as well - NIST has added some cloud-based controls, for example).

Does the Splunk Compliance Essentials app cover these requirements? Please let me know. Thanks, Alli. RSM PBST Canada.
Hi, was your SF=RF before you started the migration? Were there any issues (e.g. some node crashed/stopped) during the migration? What could you find in the _internal log? There should be some mention of the reason in the logs. r. Ismo
I have written this query:

index=index_name (log.event=res OR (log.event=tracing AND log.operationName=query_name))
| timechart span=1m avg(log.responseTime) as AvgTimeTaken, min(log.responseTime) as MinTimeTaken, max(log.responseTime) as MaxTimeTaken, count by log.operationName

My results look like this:

_time                 AvgTimeTaken:NULL   MaxTimeTaken:NULL   MinTimeTaken:NULL   count:query_name   count:NULL
2024-03-18 13:00:00                                           0                   0                  0

I want to understand what the :NULL means, and also how I can get the query to display all values. Secondly, the count is displayed for a query_name that is merely similar to the query_name in my query string; I want an exact match on query_name. Can someone please help me with this? Thanks!
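A possible reading, plus a sketch: the :NULL suffix appears because timechart splits by log.operationName, and events where that field is missing (likely the log.event=res events) fall into a NULL series. One option, treating "query_name" as a placeholder for the real operation name, is to enforce an exact, case-sensitive match with a where comparison and give the missing split-by value a label first:

index=index_name (log.event=res OR log.event=tracing)
| where 'log.event'=="res" OR 'log.operationName'=="query_name"
| eval operationName = coalesce('log.operationName', "none")
| timechart span=1m avg(log.responseTime) as AvgTimeTaken, min(log.responseTime) as MinTimeTaken, max(log.responseTime) as MaxTimeTaken, count by operationName

The == comparison in where is an exact string match, and the coalesce replaces the NULL series with a named one so every column gets values.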
Yes, it auto-selects "CSV" during import, but I have also manually selected CSV to see if there was a bug there.
Hi, if you have this information in logs that have been ingested into Splunk, then you can query it. What logs do you have? What platforms do they cover? What have you already tried? r. Ismo
Expanded my search time range to 60 minutes, as 24 hours or 30 days produced over a million events. The 60-minute search is producing hundreds of thousands. Can you review this rule to see if there is anything within the SPL code that is incorrect? It should only produce fewer than a hundred... if even that.
I run a Splunk query to see events from my web application firewall. I filter out certain violations by name, using a NOT and parentheses to list out violations I don't care to see.

My network is subject to attack, and my query, which I use to look for legitimate users being blocked, gets inundated by various IPs generating hundreds of events. How can I table fields so I can see the data I want per event, but also filter out a field if that field's event count is greater than a value?

A simple example: an IP from a facility is seen once for a block in the last 15 minutes. Another IP was seen 400 times as part of a scan. I want to see the 1 (or even 10) events from a specific source IP, but not the 400 from another. I know I can block the whole IP, or part of it with a wildcard, but that gets messy and can lead to too many IPs in a NOT statement.

Current table info in my query:

table _time, event_id, hostname, violation, policy, uri, ip_client | sort - _time

Adding a stats count by ip_client only shows the count and IP, losing the other data, and the event IDs will always be different, so the count will never be higher than 1. It would be nice if I could do something like "| where count ip_client<=10" to remove any source IPs that show up more than 10 times in the results.
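A minimal sketch of one way to do this, reusing the field names from the post: eventstats adds the per-IP count to every event without collapsing the rows, so you can filter on it and still keep the full table.

... | eventstats count as ip_count by ip_client
| where ip_count <= 10
| table _time, event_id, hostname, violation, policy, uri, ip_client
| sort - _time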
Hi, I'm not sure, but my expectation is that because CentOS 8 isn't supported (https://docs.splunk.com/Documentation/SOARonprem/6.2.0/Install/InstallUnprivileged), it cannot parse the content of this file correctly. r. Ismo
This is a valid method to do it. Did you select the correct sourcetype (csv) when you uploaded it?
Hi @srseceng, OK, Add Data on the same indexer, I suppose. In this case the issue is to search in the regex: what happens when you run the sed regex in the Splunk UI? Are you sure about the sourcetype? Did you restart Splunk after the props.conf update? Sorry for the stupid questions, but "Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth" (Sir Arthur Conan Doyle)! Ciao. Giuseppe
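For reference, a minimal sketch of the kind of index-time masking this thread is debugging, using the SSN pattern from the earlier post; the stanza name is an assumption and must match the sourcetype assigned during upload:

[your_csv_sourcetype]
SEDCMD-mask_ssn = s/(\D)\d{3}-?\d{2}-?\d{4}(\D)/\1XXXXXXXXX\2/g

SEDCMD runs at index time on the parsing tier, so it only affects data ingested after the props.conf change takes effect; events already indexed stay unmasked.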
Use the relative_time function to calculate time offsets.

| eval new_time = relative_time(now(), "+1d@d+10h")

The format string breaks down as follows:
"+1d": this time tomorrow
"@d": round the time down to 0:00
"+10h": add ten hours
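Since relative_time returns epoch seconds, a strftime eval (the readable field name here is illustrative) renders the result in human-readable form:

| eval new_time = relative_time(now(), "+1d@d+10h")
| eval readable = strftime(new_time, "%Y-%m-%d %H:%M:%S")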
Because this is a test environment, the logs are being added through the UI's "Add Data" > "Upload" feature. I have a CSV file that contains the logs.  Is this a valid test method?
@cmg I do have to ask why you are doing it this way? The app framework removes all of this necessity. As my old mentor said, "Use the platform Luke... erm Tom."

Why not just use the HTTP action and then select the data returned by using the relevant datapath downstream? The HTTP app doesn't show you all the returned fields in the playbook datapaths, as the dev couldn't know everything that would be returned, so it stops at "response_body" or "parsed_response_body". You have to write the path to the returned data yourself.

The best way is to run the action, select it in the activity pane of the container, find the value you want in the JSON presented, and click on the key in the window. There should be a datapath-type thing at the top. 0 = * and > = . in the datapath you put in the playbook.

-- Hope this helped! If it solved your issue please mark it as a solution for future questions on the same thing. Happy SOARing! --
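To make the 0 = * and > = . translation concrete, a hypothetical example (the action name and key names are invented): if the activity pane shows parsed_response_body > items > 0 > id, the playbook datapath would look like:

get_data_1:action_result.data.*.parsed_response_body.items.*.id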