Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have this query in my report, scheduled to run every week, but the results are for all time. How can I fix it?

index=dlp user!=N/A threat_type=OUTGOING_EMAIL signature="EP*block" earliest=-1w@w latest=now
| stats count by user _time
| lookup AD_enrich.csv user OUTPUTNEW userPrincipalName AS Mail, displayName AS FullName, wwwHomePage AS ComputerName, mobile AS Mobile, description AS Department, ManagerName, ManagerLastName
| table _time, Users, FullName, Mail, Mobile, ComputerName, Department, ManagerName, ManagerLastName, count
Hi all, I'm trying to add a field from a lookup to a Data Model, but the field is always empty in the Data Model, e.g. running a search like the following:

| tstats count values(My_Datamodel.Application) AS Application FROM Datamodel=My_Datamodel BY sourcetype

but if I use the lookup command, it works:

| tstats count values(My_Datamodel.Application) AS Application FROM Datamodel=My_Datamodel BY sourcetype | lookup my_lookup.csv sourcetype OUTPUT Application

So the lookup is correct. When I try to add the field to the Data Model, it can be added, but it's still always empty. Has anyone experienced this behavior and found a workaround? Ciao. Giuseppe
We successfully completed a Splunk upgrade from version 8.1.4 to 9.0.6 on the indexers, search heads, and DS, but we are facing an issue while upgrading the HF. Could anyone help with the complete Splunk upgrade steps for a HF? Also, please suggest a fix for the error we see after untarring the installation file and starting the service with accept license:

couldn't run "splunk" migrate: No such file or directory ERROR while running rename-cluster-app migration
Hi, Not sure if this is even a problem, but I thought I would be proactive and ask what other folks have experienced. *Note* that I am not an experienced Splunk admin - I can do a few things (add users, add forwarders, update to a newer version), but I really don't know how to use it. Admin newbie. We are running Splunk Enterprise v8.2.6 on a single RHEL6 server. We need to get off of RHEL6, so the plan was to migrate the Splunk install to a new RHEL8 server and then upgrade to the newest Splunk version. My understanding of Splunk is that it is pretty self-contained - to update the version, you just overwrite the /opt/splunk dir with the new Splunk tar file. Our data is held in a separate filesystem, the /data/splunk dir. So, the process was:
1- install Splunk v8.2.6 on the new RHEL8 server, and verify it starts and works
2- shut down the old RHEL6 Splunk
3- copy the old RHEL6 /opt/splunk dir on top of the new RHEL8 /opt/splunk dir
4- copy the old RHEL6 /data/splunk dir to the new RHEL8 server, in the /data/splunk dir
5- shut down the RHEL6 Splunk server
6- ensure all the networking, DNS, etc. is resolving to the new RHEL8 server
7- start up Splunk on the new RHEL8 server
I followed this process this morning, and it appears to have worked. I am seeing forwarders (there are 160) check in on the new server, and I can run searches on host=X and see that X has been in contact. But there is one thing I am seeing that I don't know is a problem or not. If I look at "Indexes and Volumes: Instance" for the previous 24 hours, there is data there up until the old RHEL6 server was turned off. Since moving to the new RHEL8 server, the indexes all appear to be 0GB in size. I don't know enough to know whether this is an issue. It seems like it is, to me, but I am not really sure - could everything just be rebuilding on the new server, or has it become unavailable somehow? If anyone has an answer I would be glad to know.
Otherwise I find out Monday morning, I guess, when the users log on to the new rhel8 server. Thanks, Michael.
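For a migration like the one above, one way to sanity-check what is actually on disk per index (a sketch in SPL; run it on the new server and compare against what the old server reported) is dbinspect:

```
| dbinspect index=*
| stats sum(sizeOnDiskMB) AS sizeOnDiskMB latest(modTime) AS lastModified BY index
| sort - sizeOnDiskMB
```

If the buckets were copied over intact, the per-index totals should roughly match the old RHEL6 server's figures; a 0GB reading in the monitoring dashboards alone may just be the introspection data repopulating.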
ERROR TcpInputProc [nnnnnn FwdDataReceiverThread] - Encountered Streaming S2S error=Too many fields. fieldCount=nnnnn with source::<channel> for data received from src=x.x.x.x:nnnn. Here <channel> contains all the required info about the troubled sourcetype/source/host.
A county owns an enterprise license. Its cluster is in Azure Commercial. Can Splunk Enterprise also have a cluster in Azure Government (without a separate enterprise instance)? We have some information that is CJIS-related and needs to be in AWS or Azure Gov. Thanks
I have tried using search but can't seem to get it right. Any guidance is appreciated. This alert detects any traffic to an IP on the IOC list, or from an IP on the IOC list, where the traffic has been specifically allowed.

| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic where NOT All_Traffic.src_ip IN (10.0.0.0/8, 10.0.0.0/8, 10.0.0.0/8) AND All_Traffic.action=allow* by _time All_Traffic.src_ip All_Traffic.dest_ip
| `drop_dm_object_name(All_Traffic)`
| lookup ip_iocs ioc as src_ip OUTPUTNEW last_seen
| append
    [| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic where All_Traffic.src_ip IN (10.0.0.0/8, 10.0.0.0/8) AND NOT All_Traffic.dest_ip IN (10.0.0.0/8, 10.0.0.0/8) AND NOT All_Traffic.protocol=icmp by _time All_Traffic.src_ip All_Traffic.dest_ip
    | `drop_dm_object_name(All_Traffic)`
    | lookup ip_iocs ioc as dest_ip OUTPUTNEW last_seen]
| where isnotnull(last_seen)
| head 51
Hi there! Your friendly neighborhood Splunk Teddy Bear. Just stopping in to let you know that if you're using my Admin's Little Helper app, you may want to update to the v1.2.0 version that passed cloud vetting just last Friday. What's going on is that in the next version of Splunk Cloud (tentatively Feb/March 2024 or so) there's a change happening around how distributed search works. Unfortunately, that change combined with how I'm checking capabilities in existing versions means that if you're using any older version of sa-littlehelper on this new version of Splunk Cloud, the `| btool` command will work for your search head, but will not return any results from your indexers (search peers), and will instead give error messages about needing to have the correct capability. This v1.2.0 release fixes that issue on the upcoming Splunk Cloud version while still working on all currently supported versions of Splunk Enterprise & Splunk Cloud, so I wanted to get the word out that you should update and save your future self some headache (with what I view to be core functionality). I've also posted variations of this notice in the #splunk_cloud channel on splunk-usergroups slack, the Splunk subreddit, and the GoSplunk discord. If there are other places you think I should post, let me know. As mentioned on the contact page on Splunkbase: while this app is not formally supported, the developer (me) can be reached at teddybear@splunk.com OR in splunk-usergroups slack, @teddybfez. Responses are made on a best-effort basis. Feedback is always welcome and appreciated! Learn more about splunk-usergroups slack Admins Little Helper for Splunk
Ok, I've been learning a lot about reducing event size from a recent conversation (here) and got linked a great article on search performance (this one), and an obvious key is reducing the events that come back (the first line is the most important). For a lot of the reports I'll need to write, the way to do this would be to match DIRECTORY INFORMATION, but that DOES NOT EXIST IN THE UNDERLYING DATA, and this gets complicated with what I wrote in that other post about (2) streams of data. Here is what I mean (specifics):
1. DS 1 (call data, JSON)
2. DS 2 (policy data, JSON)
3. directory.csv (inputlookup file with data, or I could query a DB using dbxquery)
So if I want to match 'mylist' in that csv, then I have to do it AFTER the first line, like this:

index="my_data" resourceId="enum*" ("disposition.disposition"="TERMINATED" OR "connections{}.left.facets{}.number"=*)
| stats values(sourcenumber) as sourcenumber values(disposition) as disposition by guid
| lookup directory_listings.csv number AS sourcenumber OUTPUT lists
| search lists="mylist"

This brings back the (2) data sources (the first line), but then I have to read through 100% of it, then match the directory, then filter, so this is a huge 'false positive' (event-to-scan count ratio). I've read before about using a subsearch, and this works great, but then leaves out one of the data sources. In other words, this:

index="policyguru_data" resourceId="enum*" ("disposition.disposition"="TERMINATED" OR sourcenumber=*)
    [ | inputlookup pg_directory_listings.csv | search lists="*mylist*" | fields number | rename number as sourcenumber | format ]
| table *

runs fast and is 1:1 event-to-scan, BUT OMITS disposition entirely, because it's not 'joining' data, but sending the sourcenumber up to the first line, which then EXCLUDES disposition events because they don't match. Does that make sense?
I suppose I could use this entire search AS a subsearch to get back 'guid' values and then pass those UP into another search, but it feels very...INCEPTION at that point! Anyway, looking for ideas. Thank you!
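For what it's worth, that nested-subsearch idea can be sketched in SPL roughly like this (a sketch only, not a verified answer - it assumes guid is present on the events of both streams, reuses the index and lookup names from the post, and is subject to the usual subsearch result/time limits):

```
index="my_data" resourceId="enum*"
    [ search index="my_data" resourceId="enum*" sourcenumber=*
          [ | inputlookup directory_listings.csv
            | search lists="*mylist*"
            | fields number
            | rename number as sourcenumber
            | format ]
      | fields guid
      | format ]
| stats values(sourcenumber) as sourcenumber values(disposition) as disposition by guid
```

The inner subsearch narrows by directory number, the middle one converts the matching events into a guid filter, and the outer search then retrieves all events for those guids - including the disposition events the single-subsearch version excluded.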
I have configured a Workload Rule but it doesn't work. I need all searches that last more than 3 minutes and are not from sc_admin to stop. I tested it in the laboratory and it worked, so is there something wrong with my rule?

(search_type=adhoc) AND NOT (role=sc_admin) AND runtime>3m

Remember that I did a lab test and the same rule worked. Splunk instance version: 9.0.2305.201. Laboratory: 9.1.2308.102. Can you help me, please?
Hi Team, We have a SH cluster as below: 3 SH members, 1 deployer. We have a few alerts that keep firing with the splunk@<> sender even though we configured our customised sender, and also a few alerts that were disabled but are still firing from the same one SH cluster member (and not from the other 2, which is expected). We have raised multiple vendor cases but got no help. Can someone help?
I read many articles about it, but no one knows how to fix it. So how can I fix it?

Error in 'IndexScopedSearch': The search failed. More than 1000000 events were found at time 1675957850.
Hi! I want to write a query that will show me all the events that were triggered by a certain rule that I set in McAfee. How do I do this? Thank you.
We have two indexers, one version 8.1.5 (which will not be updated soon) and one version 9.1.0.1. I see 9 has a nice feature, "Ingest actions", which is exactly what I need to mask some incoming Personal Information (PI). It is coming in JSON and looks something like:

\"addressLine1\":\"1234 Main Street\",

I need to find some fields and remove the content. Yes, I believe there are backslashes in there. I tested a regex on 9 and added it to the transforms.conf and props.conf files on our 8.1.5 indexer, but the rules didn't work. In one of my tests the rule caused an entire log entry to change to "999999999" - not quite what I was expecting, but now we know Splunk was applying the rule. This is one of my rules that had no effect:

[address_masking]
REGEX = (?<=\"addressLine1\":\")[^\"]*
FORMAT = \"addressLine1\":\"100 Unknown Rd.\"
DEST_KEY = _raw

Found docs, looking at them now: Configure advanced extractions with field transforms - Splunk Documentation. Can someone point out what is wrong with the above transform? Thanks!
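For comparison, the pattern Splunk's data-anonymization docs use for this kind of masking captures the text surrounding the value and re-emits it in FORMAT, because a transform with DEST_KEY = _raw replaces the entire event with whatever FORMAT produces - which is why a lookbehind with no captures rewrites the whole log line. A sketch along those lines (untested against this exact data; the sourcetype name is a placeholder, and `[^\\]*` assumes the value itself contains no backslashes):

```
# transforms.conf
[address_masking]
# capture everything up to and including  \"addressLine1\":\"  ($1),
# skip the value, then capture from the closing  \"  onward ($2)
REGEX = ^(.*\\"addressLine1\\":\\")[^\\]*(\\".*)$
FORMAT = $1XXX-MASKED-XXX$2
DEST_KEY = _raw

# props.conf  (sourcetype name is hypothetical)
[my_json_sourcetype]
TRANSFORMS-mask-address = address_masking
```

Re-emitting $1 and $2 keeps the rest of the event intact while only the captured-around value is replaced.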
I have a lookup table I am using to pull in contact information based on correlation of a couple of fields. The way the lookup table is formatted makes my results look different than what I want to see. If I can consolidate the lookup table, it will fix my issue, but I can't figure out how to do it. The table currently looks like this:

Org    Branch    Role       Name
Org A  Branch 1  President  Jack
Org A  Branch 1  VP         Jill
Org A  Branch 1  Manager    Mary
Org A  Branch 2  President  Hansel
Org A  Branch 2  VP         Gretel
Org A  Branch 3  VP         Mickey
Org A  Branch 3  Manager    Minnie

I use the Org and Branch as matching criteria and want to pull out the names for each role. I do not want to see multivalue fields when I am done. The current search looks like:

[base search]
| lookup orgchart Org Branch OUTPUTNEW Role
| mvexpand Role
| lookup orgchart Org Branch Role OUTPUTNEW Name

This works, but the mvexpand (obviously) creates a new line for each role, and I do not want multiple lines for each in my final results. I want a single line for every Org/Branch pair showing all the Roles and Names. I am thinking the way to solve this is to reformat the lookup table to look like the table below, then modify my lookup. Is there a way to "transpose" just the 2 fields?

[base search]
| lookup orgchart Org Branch OUTPUTNEW President, VP, Manager

Org    Branch    President  VP      Manager
Org A  Branch 1  Jack       Jill    Mary
Org A  Branch 2  Hansel     Gretel
Org A  Branch 3             Mickey  Minnie

Thank you!
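One way to produce the wide form of the lookup described above (a sketch, assuming the lookup names from the post; orgchart_wide.csv is a placeholder) is to chart Name over a combined Org/Branch key by Role, then split the key back into its two fields:

```
| inputlookup orgchart
| eval key=Org . "|" . Branch
| chart values(Name) over key by Role
| eval Org=mvindex(split(key, "|"), 0), Branch=mvindex(split(key, "|"), 1)
| fields - key
| table Org Branch President VP Manager
| outputlookup orgchart_wide.csv
```

The combined key works around chart accepting only a single over field; after outputlookup, the single `| lookup orgchart_wide.csv Org Branch OUTPUTNEW President VP Manager` form becomes possible.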
Hi Splunkers, I would like to export logs (raw/csv) out of Splunk Cloud periodically and send them to GCP Pub/Sub. How can I achieve this? Ideas appreciated.
Hello, I am using a field extracted at search time called "src_ip". To optimize search response times, I have created an indexed field extraction called "src_ip-index". How can I configure Splunk on the back end so that end users query only a single field that draws on both "src_ip-index" and "src_ip", preferring "src_ip-index" when it is available, for better performance? I hope that is clear enough. Best regards,
Hello, I want to extract the field "issrDsclsrReqId" using the rex command. Can someone please help me with the command to extract the value of the field bizMsgIdr, which is eiifr000005229220231229162227?

{
  "shrhldrsIdDsclsrRspn": {
    "dsclsrRspnId": "0000537ede1c5e1084490000aa7eefab",
    "issrDsclsrReqRef": {
      "issrDsclsrReqId": "eiifr000005229220231229162227",
      "finInstrmId": { "isin": "FR0000052292" },
      "shrhldrsDsclsrRcrdDt": { "dt": { "dt": "2023-12-29" } }
    },
    "pgntn": { "lastPgInd": true, "pgNb": "1" },
    "rspndgIntrmy": {
      "ctctPrsn": { "emailAdr": "ipb.asset.servicing@bnpparibas.com", "nm": "IPB ASSET SERVICING" },
      "id": { "anyBIC": "BNPAGB22PBG" },
      "nmAndAdr": {
        "adr": { "adrTp": 0, "bldgNb": "10", "ctry": "GB", "ctrySubDvsn": "LONDON", "pstCd": "NW16AA", "strtNm": "HAREWOOD AVENUE", "twnNm": "LONDON" },
        "nm": "BNP PARIBAS PRIME BROKERAGE"
      }
    }
  }
}
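Assuming the JSON above is the raw event (the value shown belongs to issrDsclsrReqId; no bizMsgIdr key appears in it), a sketch of two ways to pull it out - a rex with a named capture group, or spath with the full dotted path:

```
... | rex "\"issrDsclsrReqId\":\s*\"(?<issrDsclsrReqId>[^\"]+)\""

... | spath path=shrhldrsIdDsclsrRspn.issrDsclsrReqRef.issrDsclsrReqId output=issrDsclsrReqId
```

spath is generally the safer choice for well-formed JSON, since it is not thrown off by whitespace or key reordering.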
Hi Splunkers, I must recover the Splunk version for every component in a particular environment. I don't have access to the GUI and/or .conf files on all machines, so the idea is to try to recover that info with a Splunk search. Here: How-to-identify-a-list-of-forwarders-sending-data I got a very useful search that I used, and it returned a lot of info about forwarders - all of them: UF, HF and so on. Since I'm not in a cloud env but an on-prem one, I also have to recover the Splunk version used on the indexers and search heads. So, my question is: how should I change the search from the above link to get the version from the IDXs and SHs?
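One approach for the indexers and search heads (a sketch; it assumes the search head can reach its search peers over REST and that your role is allowed to query the server/info endpoint) is to ask every connected instance directly rather than inferring versions from forwarder traffic:

```
| rest /services/server/info splunk_server=*
| table splunk_server host server_roles version
```

Run from a search head, `splunk_server=*` fans the REST call out to the search head itself and all of its peers, so the IDX and SH versions come back in one table.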
I am wondering why the two following searches, when applied to exactly the same time range, return different values:

index=<my_index> logid=0000000013 | stats count
index=<my_index> logid=13 | stats count

The first one returns many more results than the second. (The type indicated by Splunk for this field is "number", not "string".)
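A field=value filter like these matches the literal text of the extracted field, not its numeric value, so zero-padded and unpadded forms select different events. One way to compare numerically regardless of padding (a sketch; it trades the fast up-front filter for an eval-time comparison, so keep as much other filtering as possible before the where):

```
index=<my_index> logid=*
| where tonumber(logid)=13
| stats count
```

This should count both logid=13 and logid=0000000013 events; whether that is the desired behavior depends on whether the two forms really represent the same log ID in the source data.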