All Topics
Hello, I want to add a user to Splunk. I have a free trial license, and there is no "USER" or "ADD USER" option in my Splunk Enterprise interface. How else can I do that?
Does anyone know of any good hands-on, guided trainings for Splunk? There are a lot out there, but the majority I have found are just videos to watch, and I don't really learn best that way. If anyone knows of some, please let me know! (If anyone knows the same for any CrowdStrike trainings, please let me know too!)
Hi All, I need to display results like the output of:

|chart count over API by StatusCode

API   200   300   400   total
---   ---   ---   ---   -----

but I need to display more fields alongside API, such as host and method:

API   host   method   200   300   400   total
---   ----   ------   ---   ---   ---   -----

Please help me get these results.
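One possible approach (a sketch; the index name is a placeholder, and I'm assuming the raw events carry API, host, method, and StatusCode, since chart only supports a single "over" field): combine the row fields into one key, chart over that, then split it back apart:

index=my_index
| eval series=API."|".host."|".method
| chart count over series by StatusCode
| addtotals fieldname=total
| eval API=mvindex(split(series,"|"),0), host=mvindex(split(series,"|"),1), method=mvindex(split(series,"|"),2)
| fields - series
| table API host method *

The pipe character is just an assumed delimiter; pick one that cannot appear in your field values.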
I am getting the error "Could not load lookup=LOOKUP-minemeldfeeds_dest_lookup" in one of my dashboard panels. Any solutions?
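A quick way to find where that automatic lookup is defined (a diagnostic sketch, assuming a default *nix install; run on the search head):

$SPLUNK_HOME/bin/splunk btool props list --debug | grep -i minemeldfeeds
$SPLUNK_HOME/bin/splunk btool transforms list --debug | grep -i minemeldfeeds

The --debug flag shows which app each stanza comes from, so you can check whether the lookup definition and its backing file are actually visible to the app the dashboard lives in.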
Hi All, I have tried looking over the documentation for this, but I am super confused and really struggling to wrap my head around it. I have an environment where Splunk is ingesting syslog from 2 firewalls. The logs are only audit/management related, and they need to be sent to a separate server for compliance (hence Splunk). I want to configure a retention policy where this data is deleted after 1 year, as that is the specific requirement.

From what I can tell, I just need to add the "frozenTimePeriodInSecs" setting to the indexes.conf file for the "main" index (as this is where the events are going). Current ingestion is ~150,000 events per day, and daily ingestion is ~30-35MB. However, this is subject to change in the future as more firewalls come online. There is plenty of storage available, and the requirement is just 1 year of searchable data. But I keep seeing things about hot/warm/cold/frozen buckets, and I just don't get it. All that's needed is 1 year of searchable data; anything older than (time.now() - 365 days) can be deleted.

Can someone please assist me with what I need to do to make this work?
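For reference, a minimal indexes.conf sketch, assuming the events really do land in the default "main" index and that no coldToFrozenDir/coldToFrozenScript is configured (in which case frozen buckets are simply deleted):

[main]
# 365 days * 86400 seconds/day = 31536000 seconds of searchable retention
frozenTimePeriodInSecs = 31536000

Note that retention is enforced per bucket: a bucket is only frozen (here, deleted) once its newest event is older than the limit, so data may linger slightly past 365 days.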
Hello, I have a CSV file with many, MANY columns (in my case, 7334 columns with an average length of 145-146 chars each). This is a telemetry file exported from some networking equipment, and this is just part of the exported data... The file has over 1000 data rows, but I'm just trying to add 5 rows at the moment.

Trying to create an input for the file fails when adding more than 4175 columns, with the following error: "Accumulated a line of 512256 bytes while reading a structured header, giving up parsing header". I have already tried increasing all TRUNCATE settings to well above this value (several orders of magnitude), as well as the "[kv]" limits in limits.conf. Nothing helps. I searched the forum here but couldn't find anything relevant. A Google search yielded two results: one where people just decided that headers that are too long are the user's problem and did not offer any resolution (not even to say it's not possible), and the other just went unanswered. I couldn't find anything relevant in the Splunk online documentation or REST API specifications either.

I will also mention that processing the full data file with Python, using either the standard csv parser or pandas, works just fine and very quickly. The total file size is ~92MB, which is not big at all IMHO. My Splunk info: Version: 9.1.2, Build: b6b9c8185839, Server: 834f30dfffad, Products: hadoop. Needless to say, the web frontend crashes entirely when I try to create the input, so I'm doing everything via the Python SDK now. Any ideas how this can be fixed so I can add all of my data?
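Since pandas already parses the file fine, one possible workaround (my assumption, not an official fix; filenames are placeholders) is to pre-process the wide CSV into long key/value rows, which sidesteps structured-header parsing entirely:

import pandas as pd

# 7334 columns parse fine in pandas, as noted above
df = pd.read_csv("telemetry.csv")
# reshape to one (row, metric, value) triple per cell
long_df = df.melt(var_name="metric", value_name="value", ignore_index=False)
long_df.to_csv("telemetry_long.csv", index_label="row")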
I have this query in my report, scheduled to run every week, but the results are for all time. How can I fix this?

index=dlp user!=N/A threat_type=OUTGOING_EMAIL signature="EP*block" earliest=-1w@w latest=now
| stats count by user _time
| lookup AD_enrich.csv user OUTPUTNEW userPrincipalName AS Mail, displayName AS FullName, wwwHomePage AS ComputerName, mobile AS Mobile, description AS Department, ManagerName, ManagerLastName
| table _time, user, FullName, Mail, Mobile, ComputerName, Department, ManagerName, ManagerLastName, count
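One thing worth checking (a sketch; the stanza name is hypothetical): the time range stored on the scheduled report itself. You can pin the window via dispatch times in savedsearches.conf (or the report's time range picker in the UI) rather than relying only on inline earliest/latest:

[My weekly DLP report]
cron_schedule = 0 6 * * 1
dispatch.earliest_time = -1w@w
dispatch.latest_time = @w

If the report's stored time range is set to All time, comparing it against these two dispatch settings is a good first step.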
Hi all, I'm trying to add a field from a lookup in a Data Model, but the field is always empty in the Data Model, e.g. running a search like the following:

| tstats count values(My_Datamodel.Application) AS Application FROM datamodel=My_Datamodel BY sourcetype

But if I use the lookup command, it works:

| tstats count values(My_Datamodel.Application) AS Application FROM datamodel=My_Datamodel BY sourcetype
| lookup my_lookup.csv sourcetype OUTPUT Application

So the lookup is correct. When I try to add the field to the Data Model, it's possible to add it, but it's still always empty. Has anyone experienced this behavior and found a workaround? Ciao. Giuseppe
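One thing I'd check (a sketch with assumed stanza names): whether the lookup is also available as an automatic, search-time lookup on the underlying sourcetypes, since an accelerated data model can only summarize fields it can resolve at acceleration time. Roughly:

# transforms.conf
[my_lookup]
filename = my_lookup.csv

# props.conf
[your_sourcetype]
LOOKUP-application = my_lookup sourcetype OUTPUT Application

After a change like this, the data model acceleration summaries would need to rebuild before tstats can see the field.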
We successfully completed a Splunk upgrade from version 8.1.4 to 9.0.6 on the indexers, search heads, and DS, but we are facing an issue while upgrading the HF. Could anyone help with the full Splunk upgrade steps for a HF? Also, please suggest a fix for the error we see after untarring the installation file and starting the service with accept license:

couldn't run "splunk" migrate: No such file or directory
ERROR while running rename-cluster-app migration
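For reference, a sketch of the usual *nix upgrade-in-place steps (paths and the tarball name are assumptions; adjust to your install):

/opt/splunk/bin/splunk stop
cp -rp /opt/splunk /opt/splunk.bak.8.1.4                    # backup first
tar -xzvf splunk-9.0.6-<build>-Linux-x86_64.tgz -C /opt     # extracts splunk/ over the existing /opt/splunk
/opt/splunk/bin/splunk start --accept-license --answer-yes

One common cause of a "couldn't run splunk migrate: No such file or directory" error is extracting the tarball into the wrong directory (e.g. ending up with a nested /opt/splunk/splunk), so it is worth verifying where the new files actually landed.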
Hi, Not sure if this is even a problem, but I thought I would be proactive and ask what other folks have experienced. *Note* that I am not an experienced Splunk admin - I can do a few things (add users, add forwarders, update to a newer version), but I really don't know how to use it. Admin newbie.

We are running Splunk Enterprise v8.2.6 on a single RHEL6 server. We need to get off RHEL6, so the plan was to migrate the Splunk install to a new RHEL8 server, and then upgrade to the newest Splunk version. My understanding of Splunk is that it is pretty self-contained - to update the version, you just overwrite the /opt/splunk dir with the new Splunk tar file. Our data is held in a separate filesystem, the /data/splunk dir. So, the process was:

1- install Splunk v8.2.6 on the new RHEL8 server, and verify it starts and works
2- shut down the old RHEL6 Splunk service
3- copy the old RHEL6 /opt/splunk dir on top of the new RHEL8 /opt/splunk dir
4- copy the old RHEL6 /data/splunk dir onto the new RHEL8 server, in the /data/splunk dir
5- shut down the RHEL6 server
6- ensure all the networking, DNS, etc. is resolving to the new RHEL8 server
7- start up Splunk on the new RHEL8 server

I followed this process this morning, and it appears to have worked. I am seeing forwarders (there are 160) check in on the new server, and I can run searches on host=X and see that X has been in contact. But there is one thing I am seeing that I don't know is a problem or not: if I look at "Indexes and Volumes: Instance" for the previous 24 hours, there is data there up until the old RHEL6 server was turned off. Since moving to the new RHEL8 server, the indexes all appear to be 0GB in size. I don't know enough to know whether this is an issue. It seems like it is, to me, but I am not really sure - could everything just be rebuilding on the new server, or has it become unavailable somehow? If anyone has an answer I would be glad to know. Otherwise I find out Monday morning, I guess, when the users log on to the new RHEL8 server. Thanks, Michael.
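A couple of sanity checks you could run on the new server (a sketch; both commands are standard SPL):

| eventcount summarize=false index=* | stats sum(count) as events by index

| dbinspect index=* | stats sum(sizeOnDiskMB) as size_mb by index, state

If these show your historical events and bucket sizes, the data made the move, and the monitoring dashboard may just need time to catch up.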
ERROR TcpInputProc [nnnnnn FwdDataReceiverThread] - Encountered Streaming S2S error=Too many fields. fieldCount=nnnnn with source::<channel> for data received from src=x.x.x.x:nnnn. Here, <channel> has all the required info about the affected sourcetype/source/host.
The name listed in my Pearson VUE account does not match my government-issued ID, but I can't update my name online; there is no place for me to re-edit it. This makes it impossible for me to schedule an exam. Does anyone know how to update my account name?
Register here. This thread is for the Community Office Hours session on Observability: Usage and Data Control on Wed, April 24, 2024 at 1pm PT / 4pm ET.

This is your opportunity to ask questions about your current Observability Usage and Data Control challenge or use case, including:

- Metrics Pipeline Management in Splunk Infrastructure Monitoring (IM)
- Metric Cardinality
- Aggregation Rules
- Impact and benefits of data dropping
- Anything else you'd like to learn!

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

Look forward to connecting!
Register here. This thread is for the Community Office Hours session on Observability: Application Performance Monitoring (APM) on Wed, April 10, 2024 at 1pm PT / 4pm ET.

This is your opportunity to ask questions about your current Observability APM challenge or use case, including:

- Sending traces to APM
- Tracking service performance with dashboards
- Setting up deployment environments
- AutoDetect detectors
- Enabling Database Query Performance
- Setting up business workflows
- Implementing high-value features (Tag Spotlight, Trace View, Service Map)
- Anything else you'd like to learn!

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

Look forward to connecting!
A county owns an enterprise license. Its cluster is in Azure Commercial. Can Splunk Enterprise also have a cluster in Azure Government (without a separate enterprise instance)? We have some information that is CJIS-related and needs to be in AWS or Azure Gov. Thanks
I have tried using search but can't seem to get it right. Any guidance is appreciated. This alert detects any traffic to an IP on the IOC list, or from an IP on the IOC list, where the traffic has been specifically allowed.

| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic where NOT All_Traffic.src_ip IN (10.0.0.0/8, 10.0.0.0/8, 10.0.0.0/8) AND All_Traffic.action=allow* by _time All_Traffic.src_ip All_Traffic.dest_ip
| `drop_dm_object_name(All_Traffic)`
| lookup ip_iocs ioc as src_ip OUTPUTNEW last_seen
| append [
    | tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic where All_Traffic.src_ip IN (10.0.0.0/8, 10.0.0.0/8) AND NOT All_Traffic.dest_ip IN (10.0.0.0/8, 10.0.0.0/8) AND NOT All_Traffic.protocol=icmp by _time All_Traffic.src_ip All_Traffic.dest_ip
    | `drop_dm_object_name(All_Traffic)`
    | lookup ip_iocs ioc as dest_ip OUTPUTNEW last_seen ]
| where isnotnull(last_seen)
| head 51
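One possible restructuring (a sketch against the same data model and lookup, with field names assumed from your search) that avoids the append by looking up both directions and coalescing:

| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic where All_Traffic.action=allow* by _time All_Traffic.src_ip All_Traffic.dest_ip
| `drop_dm_object_name(All_Traffic)`
| lookup ip_iocs ioc as src_ip OUTPUTNEW last_seen as src_last_seen
| lookup ip_iocs ioc as dest_ip OUTPUTNEW last_seen as dest_last_seen
| eval last_seen=coalesce(src_last_seen, dest_last_seen)
| where isnotnull(last_seen)
| head 51

You would still need to fold your internal-range and protocol exclusions into the where clause; this just shows the two-lookup/coalesce pattern.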
Hi there! Your friendly neighborhood Splunk Teddy Bear. Just stopping in to let you know that if you're using my Admin's Little Helper app, you may want to update to the v1.2.0 version that passed cloud vetting just last Friday.

What's going on is that in the next version of Splunk Cloud (tentatively Feb/March 2024 or so) there's a change happening around how distributed search works. Unfortunately, that change, combined with how I'm doing capability checking in existing versions, means that if you're using any older version of sa-littlehelper on this new version of Splunk Cloud, the `| btool` command will work for your search head but will not return any results from your indexers (search peers), instead giving error messages about needing to have the correct capability.

This v1.2.0 release fixes that issue on the upcoming Splunk Cloud version and still works on all currently supported versions of Splunk Enterprise & Splunk Cloud, so I wanted to get the word out that you should update and save your future self some headache (with what I view to be core functionality).

I've also posted variations of this notice on the #splunk_cloud channel on splunk-usergroups, the Splunk subreddit, and the GoSplunk Discord. If there are other places you think I should post, let me know.

As mentioned on the contact page on Splunkbase: while this app is not formally supported, the developer (me) can be reached at teddybear@splunk.com OR in splunk-usergroups slack, @teddybfez. Responses are made on a best effort basis. Feedback is always welcome and appreciated!

Learn more about splunk-usergroups slack
Admins Little Helper for Splunk
Ok, I've been learning a lot about reducing event size from a recent conversation (here) and got linked a great article on search performance (this one), and an obvious key is reducing the events that come back (the first line is the most important). For a lot of the reports I'll need to write, the way to do this would be to match DIRECTORY INFORMATION, but that DOES NOT EXIST IN THE UNDERLYING DATA, and this gets complicated with what I wrote in that other post about (2) streams of data. Here is what I mean (specifics):

1. DS 1 (call data, JSON)
2. DS 2 (policy data, JSON)
3. directory.csv (inputlookup file with data, or I could query a DB using dbxquery)

So if I want to match 'mylist' in that csv then I have to do it AFTER the first line, like this:

index="my_data" resourceId="enum*" ("disposition.disposition"="TERMINATED" OR "connections{}.left.facets{}.number"=*)
| stats values(sourcenumber) as sourcenumber values(disposition) as disposition by guid
| lookup directory_listings.csv number AS sourcenumber OUTPUT lists
| search lists="mylist"

This brings back the (2) data sources (the first line), but then I have to read through 100% of it, then match the directory, then filter, so there is a huge 'false positive' (event to scan count) ratio. I've read before about using a subsearch, and this works great, but then leaves out one of the data sources. In other words, this:

index="policyguru_data" resourceId="enum*" ("disposition.disposition"="TERMINATED" OR sourcenumber=*)
    [ | inputlookup pg_directory_listings.csv
      | search lists="*mylist*"
      | fields number
      | rename number as sourcenumber
      | format ]
| table *

runs fast and is 1:1 event-to-scan, BUT OMITS disposition entirely, because it's not 'joining' data but sending the sourcenumber up to the first line, which then EXCLUDES disposition because it doesn't match. Does that make sense? I suppose I could use this entire search AS a subsearch to get back 'guid' values and then pass that UP into another search, but that feels very... INCEPTION at that point! Anyway, looking for ideas. Thank you!
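One hedged idea (a sketch using your field and lookup names; I'm assuming the format subsearch expands to sourcenumber="x" OR sourcenumber="y" as in your second search): nest the subsearch inside the OR so the TERMINATED events still match the first line:

index="policyguru_data" resourceId="enum*"
    ("disposition.disposition"="TERMINATED" OR
      [ | inputlookup pg_directory_listings.csv
        | search lists="*mylist*"
        | fields number
        | rename number as sourcenumber
        | format ])
| stats values(sourcenumber) as sourcenumber values(disposition) as disposition by guid
| lookup pg_directory_listings.csv number AS sourcenumber OUTPUT lists
| search lists="*mylist*"

This keeps the early filter for the call-data stream while still admitting the disposition events, then re-applies the lookup after stats to confirm list membership per guid.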
I have configured a Workload Rule but it doesn't work. I need all searches that last more than 3 minutes and are not from sc_admin to stop. I tested it in the lab and it worked. Is there something wrong with my rule?

(search_type=adhoc) AND NOT (role=sc_admin) AND runtime>3m

Remember that I did a lab test and the same rule worked.
Splunk instance version: 9.0.2305.201
Laboratory: 9.1.2308.102
Can you help me, please?
Hi Team, We have a SH cluster as below: 3 SH members, 1 Deployer. We have a few alerts that keep firing with the splunk@<> sender even though we configured our customised sender, and also a few alerts that were disabled but are still firing from the same one SH cluster member (and not from the other 2, which is expected). We have raised multiple vendor cases but got no help. Can someone help?
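One way to compare alert state across the members (a sketch; the alert name is a placeholder) is to run this on each SH member and diff the results:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="your_alert_name"
| table title disabled action.email.from eai:acl.app author

If "disabled" or "action.email.from" differs between members, the knowledge objects are out of sync, and a fresh deployer push (or cleaning up local overrides on the odd member) may be needed.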