All Posts


According to regex101, my regex is correct, so the problem must be in the FORMAT.
That didn't work either.
A county owns an enterprise license. Its cluster is in Azure Commercial. Can Splunk Enterprise also have a cluster in Azure Government (without a separate enterprise instance)? We have some information that is CJIS-related and needs to be in AWS or Azure Gov. Thanks
I have tried using search but can't seem to get it right. Any guidance is appreciated. This alert detects any traffic to an IP on the IOC list, or from an IP on the IOC list, where the traffic has been specifically allowed.

| tstats summariesonly=true allow_old_summaries=true
    values(All_Traffic.dest_port) as dest_port
    values(All_Traffic.protocol) as protocol
    values(All_Traffic.action) as action
    values(sourcetype) as sourcetype
    from datamodel=Network_Traffic.All_Traffic
    where NOT All_Traffic.src_ip IN (10.0.0.0/8, 10.0.0.0/8, 10.0.0.0/8)
      AND All_Traffic.action=allow*
    by _time All_Traffic.src_ip All_Traffic.dest_ip
| `drop_dm_object_name(All_Traffic)`
| lookup ip_iocs ioc as src_ip OUTPUTNEW last_seen
| append
    [| tstats summariesonly=true allow_old_summaries=true
        values(All_Traffic.dest_port) as dest_port
        values(All_Traffic.protocol) as protocol
        values(All_Traffic.action) as action
        values(sourcetype) as sourcetype
        from datamodel=Network_Traffic.All_Traffic
        where All_Traffic.src_ip IN (10.0.0.0/8, 10.0.0.0/8)
          AND NOT All_Traffic.dest_ip IN (10.0.0.0/8, 10.0.0.0/8)
          AND NOT All_Traffic.protocol=icmp
        by _time All_Traffic.src_ip All_Traffic.dest_ip
    | `drop_dm_object_name(All_Traffic)`
    | lookup ip_iocs ioc as dest_ip OUTPUTNEW last_seen]
| where isnotnull(last_seen)
| head 51
thank you. It worked. 
did you ever resolve this issue? 
Hi (again!), YES!  Your searched worked and I get it!  The long form way you wrote it with the "AND" condition is exactly why it's excluding and what I meant. I suppose I didn't think to put th... See more...
Hi (again!), YES! Your search worked and I get it! The long-form way you wrote it with the "AND" condition is exactly why it's excluding, and is what I meant. I suppose I didn't think to put the 'bracket subsearch' INSIDE the parenthetical OR statement, and this dramatically reduces the hits to (a) terminated + (b) the | format results. Thank you!
Hi there! Your friendly neighborhood Splunk Teddy Bear. Just stopping in to let you know that if you're using my Admin's Little Helper app, you may want to update to the v1.2.0 version that passed cloud vetting just last Friday.

What's going on is that in the next version of Splunk Cloud (tentatively Feb/March 2024 or so) there's a change happening around how distributed search works. Unfortunately, that change combined with how I'm checking capabilities in existing versions means that if you're using any older version of sa-littlehelper on this new version of Splunk Cloud, the `| btool` command will work for your search head, but will not return any results from your indexers (search peers), and will instead give error messages about needing the correct capability.

This v1.2.0 release fixes that issue on the upcoming Splunk Cloud version and still works on all currently supported versions of Splunk Enterprise & Splunk Cloud too, so I wanted to get the word out that you should update and save your future self some headache (with what I view to be core functionality).

I've also posted variations of this notice on the #splunk_cloud channel on splunk-usergroups, the Splunk subreddit, and the GoSplunk discord. If there are other places you think I should post, let me know.

As mentioned on the contact page on Splunkbase: While this app is not formally supported, the developer (me) can be reached at teddybear@splunk.com OR in splunk-usergroups slack, @teddybfez. Responses are made on a best-effort basis. Feedback is always welcome and appreciated! Learn more about splunk-usergroups slack. Admins Little Helper for Splunk
Does this query return the events with disposition in addition to events with the specific sourcenumber? index="policyguru_data" resourceId="enum*" ("disposition.disposition"="TERMINATED" OR [ | in... See more...
Does this query return the events with disposition in addition to events with the specific sourcenumber?

index="policyguru_data" resourceId="enum*"
    ("disposition.disposition"="TERMINATED" OR
        [ | inputlookup pg_directory_listings.csv
          | search lists="*mylist*"
          | fields number
          | rename number as sourcenumber
          | format ])

I think the last search you shared was actually skipping over the disposition events because of how the subsearch was formatted into the parent search. An expanded version of your last search, I believe, would look like this:

index="policyguru_data" resourceId="enum*"
    AND ("disposition.disposition"="TERMINATED" OR sourcenumber=*)
    AND ( ( sourcenumber="<val_1>" )
       OR ( sourcenumber="<val_2>" )
       OR ...
       OR ( sourcenumber="<val_n>" ) )

where val_1, val_2, ..., val_n are the sourcenumbers from the lookup that you are trying to filter on.
Ok, been learning a lot about reducing event size from a recent conversation (here) and got linked a great article on search performance (this one), and an obvious key is reducing the events that come back (the first line is the most important). For a lot of the reports I'll need to write, the way to do this would be to match DIRECTORY INFORMATION, but that DOES NOT EXIST IN THE UNDERLYING DATA, and this gets complicated with what I wrote in that other post about (2) streams of data. Here is what I mean (specifics):

1. DS 1 (call data, JSON)
2. DS 2 (policy data, JSON)
3. directory.csv (inputlookup file with data, or I could query a DB using dbxquery)

So if I want to match 'mylist' in that csv, then I have to do it AFTER the first line, like this:

index="my_data" resourceId="enum*" ("disposition.disposition"="TERMINATED" OR "connections{}.left.facets{}.number"=*)
| stats values(sourcenumber) as sourcenumber values(disposition) as disposition by guid
| lookup directory_listings.csv number AS sourcenumber OUTPUT lists
| search lists="mylist"

This brings back the (2) datasources (the first line), but then I have to read through 100% of it, then match the directory, then filter, so this has a huge 'false positive' (event-to-scan-count) ratio. I've read before about using a subsearch and this works great, but then leaves out one of the data sources. In other words, this:

index="policyguru_data" resourceId="enum*" ("disposition.disposition"="TERMINATED" OR sourcenumber=*)
    [ | inputlookup pg_directory_listings.csv
      | search lists="*mylist*"
      | fields number
      | rename number as sourcenumber
      | format ]
| table *

runs fast and is 1:1 event-to-scan, BUT OMITS disposition entirely, because it's not 'joining' data, but sending the sourcenumber up to the first line, which then EXCLUDES disposition because it doesn't match. Does that make sense?
I suppose I could use this entire search AS a subsearch to get back 'guid' values and then pass that UP into another search, but that feels very... INCEPTION at that point! Anyway, looking for ideas. Thank you!
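For what it's worth, that 'guid subsearch' idea can be sketched roughly like this, using the field and lookup names from the thread (untested, so treat it as a starting point, not a working answer): the inner search resolves the directory match to guid values via | format, and the outer search then retrieves both streams for only those guids.

```spl
index="policyguru_data" resourceId="enum*"
    [ search index="policyguru_data" resourceId="enum*"
        [ | inputlookup pg_directory_listings.csv
          | search lists="*mylist*"
          | fields number
          | rename number as sourcenumber
          | format ]
      | fields guid
      | format ]
| stats values(sourcenumber) as sourcenumber values(disposition) as disposition by guid
```

The cost is two passes over the index, and subsearches carry result-count and runtime limits, so whether this beats the lookup-then-filter approach would need to be measured on the real data.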
Make sure WLM is enabled and that there are no other rules with a higher priority that prevent this rule from executing.
@richgalloway Sorry, I got confused. I'll state my exact requirements. In my location field there are some locations: AB, AC, AD, AF, and so on. I want a new one, AM, in the location field, where AM indicates the sum of (AB AC AD AF). I want to display both AB AC AD AF and AM in the location field. Don't consider this: AA01,10,5. I tried something like this: |eval row=AM05 |table row location — but it shows AM05 for all rows. I want it only for the sum of (AB AC AD AF), which is AM05, without replacing the existing ones (AB AC AD AF).
I have configured a Workload Rule but it doesn't work. I need all searches that last more than 3 minutes and are not from sc_admin to stop. I tested it in the lab and it worked; is there something wrong with my rule?

(search_type=adhoc) AND NOT (role=sc_admin) AND runtime>3m

Remember that I did a lab test and the same rule worked there. Splunk instance version: 9.0.2305.201. Lab: 9.1.2308.102. Can you help me, please?
There are more than 1 million events indexed with the same timestamp - February 9, 2023 15:50:50 UTC. Double-check the inputs.conf and props.conf settings to ensure events are being onboarded correctly. Searching this data will be a challenge, if it can be done at all.  Add index, source, sourcetype, and host fields to the base query to narrow the scope of the search as much as possible.
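For instance, a narrowed base search might look like the following; the index, sourcetype, host, and source values here are placeholders, and the time bounds bracket the problem timestamp:

```spl
index=my_index sourcetype=my_sourcetype host=my_host source=my_source
    earliest="02/09/2023:15:45:00" latest="02/09/2023:15:55:00"
```

Constraining all of these in the first line of the search lets Splunk skip non-matching buckets entirely rather than scanning and discarding events.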
Restating the requirements does not explain them.

@Muthu_Vinith wrote: For example, I have location field containing AB, AC, AD. I need to sum these three locations and create a new location named AM05, without replacing the existing AB, AC and AD.

You have that. See the following example query:

| makeresults format=csv data="location,cap,login
AA01,10,5
AB02,6,0
AC03,10,0"
| appendpipe [stats sum(cap) as cap, sum(login) as login | eval location="AM05"]
| table location cap login

@Muthu_Vinith wrote: When searching for AM05, I want to see the added values, and when searching for AB, it should display the existing value !!

The AM05 location doesn't exist until this search runs. Therefore, you can't search for AM05.
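If only the summed row is wanted in the output, the computed location can be filtered after the appendpipe rather than searched for in the index (a sketch using the same makeresults sample data as above):

```spl
| makeresults format=csv data="location,cap,login
AA01,10,5
AB02,6,0
AC03,10,0"
| appendpipe [stats sum(cap) as cap, sum(login) as login | eval location="AM05"]
| search location="AM05"
```

This works because | search here filters the result set inside the pipeline, after the AM05 row has been created, which is different from searching the index for a location that was never ingested.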
@PickleRick, what approach should be taken to export data that lands in Cloud through modular inputs, rather than from any UF/HF?
Can you use a drilldown to set a token and use that token in a URL? If I understand the OP correctly, yes. Here is what I did today; I found this post while trying to solve this problem. I have a Studio dashboard that lists my installed apps in a table. An app may have a hyperlink and I want to navigate to the link from the dashboard. (I simplified this search from what is shown in the image, eliminating inputs, etc.)

| rest splunk_server=local /services/apps/local
| rename attribution_link as URL
| table title label description author version build URL
| eval URL=replace(URL,"https|http\:\/\/","")

I can now click the URL column and navigate in a new tab to the URL. The URL is set by a token, then another drilldown opens the link. I didn't see a way in the GUI to add two token types, so I used copy/paste to add a 2nd drilldown code segment. The first segment is drilldown.setToken and the 2nd segment is drilldown.customUrl, lines 22 - 41:

"eventHandlers": [
    {
        "type": "drilldown.setToken",
        "options": {
            "tokens": [
                {
                    "token": "url",
                    "key": "row.URL.value"
                }
            ]
        }
    },
    {
        "type": "drilldown.customUrl",
        "options": {
            "url": "https://$url$",
            "newTab": true
        }
    }
],

At first it didn't work. But then I found the drilldown option was set to "false." After setting that to "true" it worked as intended. Hurray! Line 5:

"drilldown": "true",

I hope that helps. Happy Splunking!
Hi Team, we have an SH cluster as follows: 3 SH members, 1 Deployer. A few alerts keep firing with the splunk@<> sender even though we configured our customised sender, and a few alerts were disabled but are still firing from one SH cluster member (and not from the other 2, which is expected). We have raised multiple vendor cases but no help. Can someone help?
This is a fundamental problem with data that was badly ingested into Splunk. Splunk returns results in reverse chronological order, so it needs to be able to sort the results properly based on the _original_ value of the _time field. (Afterwards, the _time field can be rewritten during the search pipeline without affecting the result order.) If you have several hundred thousand events indexed at the same point in time, Splunk cannot sort them due to memory constraints. It's not a problem with the search as such; it's a problem with the data — fix your data onboarding.
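As a sketch of the kind of onboarding fix meant here — the sourcetype name and timestamp format below are assumptions and must be adjusted to the actual data — a props.conf entry on the first full Splunk instance that parses the data (HF or indexer) tells Splunk where and how to read each event's real timestamp instead of falling back to a single index-time value:

```
# props.conf -- sourcetype name and TIME_FORMAT are illustrative, not from the thread
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30
```

Note that this only affects newly ingested events; the million events already indexed with the identical timestamp would need to be re-ingested to get correct _time values.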