All Posts

Hi @GIA ... Let's troubleshoot bit by bit. Please try these; I feel something is wrong with >>> [|inputlookup internal_ranges.csv |] <<<

Variant 1:
| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic
| search NOT (All_Traffic.src_ip [|inputlookup internal_ranges.csv |])

Variant 2 (no trailing pipe inside the subsearch):
| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic
| search NOT (All_Traffic.src_ip [|inputlookup internal_ranges.csv ])
Hi All, I have tried looking over the documentation for this, but I am super confused and really struggling to wrap my head around it. I have an environment where Splunk is ingesting syslog from 2 firewalls. The logs are only audit/management related, and they need to be sent to a separate server for compliance (hence Splunk). I want to configure a retention policy where this data is deleted after 1 year, as that is the specific requirement.

From what I can tell, I just need to add the frozenTimePeriodInSecs line to the indexes.conf file for the "main" index (as this is where the events are going). Current ingestion is ~150,000 events per day, and daily ingestion is ~30-35 MB. However, this is subject to change in the future as more firewalls come online.

There is plenty of storage available, but the requirement is just 1 year of searchable data. I keep seeing things about hot/warm/cold/frozen etc. and I just don't get it. All that's needed is 1 year of searchable data; anything older than (time.now() - 365 days) can be deleted. Can someone please assist me with what I need to do to make this work?
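A minimal sketch of what this could look like in indexes.conf, assuming the events really do land in the main index (frozenTimePeriodInSecs is the actual indexes.conf attribute; everything else here is illustrative):

```ini
# indexes.conf -- retention sketch for ~1 year of searchable data
[main]
# 365 days * 86400 seconds = 31536000
frozenTimePeriodInSecs = 31536000
```

With no coldToFrozenDir or coldToFrozenScript configured, buckets that "freeze" are simply deleted. Note that freezing happens per bucket, once the newest event in the bucket is older than the threshold, so individual events may remain searchable slightly longer than 365 days.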
What I need help with is how to use lookup tables for the IPs in this search. I have several rules similar to this one, but I can't add any IPs inline; I have to use lookups for those. Below is how I am writing it, but it's obviously wrong. Thanks.

| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic
| search NOT (All_Traffic.src_ip [|inputlookup internal_ranges.csv |]) AND (All_Traffic.dest_ip [|inputlookup internal_ranges.csv |]) AND (All_Traffic.action="allow*") by _time All_Traffic.src_ip All_Traffic.dest_ip
| `drop_dm_object_name(All_Traffic)`
| lookup ip_iocs.csv ioc as src_ip OUTPUTNEW last_seen
| append [| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic where (All_Traffic.src_ip IN [|inputlookup internal_ranges.csv |]) AND NOT (All_Traffic.dest_ip IN [|inputlookup internal_ranges.csv|]) AND NOT (All_Traffic.protocol=icmp) by _time All_Traffic.src_ip All_Traffic.dest_ip | `drop_dm_object_name(All_Traffic)` | lookup ip_iocs.csv ioc as dest_ip OUTPUTNEW last_seen]
| where isnotnull(last_seen)
| head 51
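For reference, the usual pattern for feeding a lookup into a tstats filter is a subsearch that returns the lookup column renamed to the target field, so it expands into field=value OR pairs. A sketch, assuming internal_ranges.csv has a column named ip_range (the field name is a guess):

```
| tstats summariesonly=true count from datamodel=Network_Traffic.All_Traffic
    where NOT [| inputlookup internal_ranges.csv | fields ip_range | rename ip_range as All_Traffic.src_ip ]
    by _time All_Traffic.src_ip All_Traffic.dest_ip
```

If the lookup holds CIDR ranges rather than exact IPs, this expansion won't match; in that case a lookup definition with CIDR match_type, applied after the tstats, is the usual alternative.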
I don't want to learn regex; I want to replace personal information with fixed strings. Can someone at Splunk give me the correct expression to use? Since testing in other environments doesn't help, and Splunk needs a restart just to try out a rule, this is really painful.
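For context, index-time masking with fixed replacement strings is typically done with a SEDCMD setting in props.conf on the indexer or heavy forwarder. A sketch (the sourcetype name and the pattern are placeholders and would need tailoring to the actual data; SEDCMD does still require a regex on the match side, even though the replacement is a fixed string):

```
# props.conf
[my_sourcetype]
# Replace anything that looks like an email address with a fixed string.
SEDCMD-mask_email = s/[^\s@]+@[^\s@]+/EMAIL_REDACTED/g
```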
Hello, I have a CSV file with many, MANY columns (in my case there are 7334 columns with an average length of 145-146 chars each). This is a telemetry file exported from some networking equipment, and this is just part of the exported data. The file has over 1000 data rows, but I'm just trying to add 5 rows at the moment.

Trying to create an input for the file fails when adding more than 4175 columns, with the following error: "Accumulated a line of 512256 bytes while reading a structured header, giving up parsing header"

I have already tried to increase all TRUNCATION settings to well above this value (several orders of magnitude), as well as the [kv] limits in limits.conf. Nothing helps. I searched the forum here but couldn't find anything relevant. A Google search yielded two results: one where people just decided that headers that are too long are the user's problem and did not offer any resolution (not even to say it's not possible); the other went unanswered. I couldn't find anything relevant in the Splunk online documentation or REST API specifications either.

I will also mention that processing the full data file with Python using either the standard csv parser or Pandas works just fine and very quickly. The total file size is ~92 MB, which is not big at all IMHO.

My Splunk info: Version: 9.1.2, Build: b6b9c8185839, Server: 834f30dfffad, Products: hadoop

Needless to say, the web frontend crashes entirely when I try to create the input, so I'm doing everything via the Python SDK now. Any ideas if this can be fixed so I can add all of my data?
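Since the poster is already preprocessing with Python, one workaround (a sketch that sidesteps the header-length limit rather than raising it; function and field names here are illustrative) is to melt the wide CSV into long key/value rows before ingestion, so no single header line ever gets near the limit:

```python
import csv
import io

def melt_csv(text, id_column):
    """Turn a wide CSV into long rows of (id, metric, value) dicts."""
    reader = csv.DictReader(io.StringIO(text))
    rows = []
    for record in reader:
        key = record.pop(id_column)  # keep the identifier column as-is
        for field, value in record.items():
            rows.append({"id": key, "metric": field, "value": value})
    return rows

# Tiny stand-in for the real 7334-column telemetry export.
sample = "device,cpu,mem\nfw1,10,20\nfw2,30,40\n"
long_rows = melt_csv(sample, "device")
```

The long rows can then be written back out as a three-column CSV (or sent as JSON events), which Splunk ingests without any structured-header parsing at all.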
At a glance, a lookup in the data model definition should work correctly if, as previously noted, the lookup definition and lookup source are correctly exported relative to the data model and everything is correctly replicated to the indexers. What happens when you execute the derived data model search directly? With a dataset named Foo and a lookup named bar, for example, it should contain:

... | lookup bar baz output qux | rename baz as Foo.baz | rename qux as Foo.qux | ...

and as with other fields, the new fields should be addressable using their dataset prefix. Does an unaccelerated data model return the field?
Hi @bhagyashriyan, This is a challenge in Splunk Cloud at scale. If you manage data flow at the source or integration layers, you may prefer to tee your data to both Splunk Cloud and Google Cloud Pub/Sub at one of those layers. Otherwise, you can execute relatively simple saved searches in Splunk Cloud using an external client and stream the output to Google Cloud Pub/Sub. For example:

index=foo | fields - _raw | table *

will return _time and all fields available at search time from the search's execution context (user and app). Note that results are returned in _time reversed order, newest to oldest. In Google Cloud, you can use a combination of low cost services to periodically execute the search via the Splunk Cloud REST API in batches over fixed _time intervals and stream the results to Google Cloud Pub/Sub.
Hi @tscroggins, thank you for your answer. I don't have automatic lookups, and lookups and knowledge bundles should be correctly replicated because we are on Splunk Cloud. I could check this by opening a case with Support. Thank you again for your help. Ciao. Giuseppe
Yes, it should be displayed, but it may be cached. Did you restart splunkweb or splunk after removing the files?
Hi @gcusello, Are automatic lookups working correctly, is the lookup replicated, and is the knowledge bundle up to date and replicating?
The steps for upgrading an HF are exactly the same as those for upgrading an indexer or search head.  The good news is the old version of the HF will work with the newer version of the indexers and DS. This old answer may help a little: https://community.splunk.com/t5/Installation/Upgrade-to-6-2-fails-on-windows-7-with-quot-ERROR-while-running/m-p/175801
I have this query in my report scheduled to run every week, but the results are for all time. How can I fix it?

index=dlp user!=N/A threat_type=OUTGOING_EMAIL signature="EP*block" earliest=-1w@w latest=now
| stats count by user _time
| lookup AD_enrich.csv user OUTPUTNEW userPrincipalName AS Mail, displayName AS FullName, wwwHomePage AS ComputerName, mobile AS Mobile, description AS Department, ManagerName, ManagerLastName
| table _time, Users, FullName, Mail, Mobile, ComputerName, Department, ManagerName, ManagerLastName, count
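Two things in the query as posted are worth flagging: table references Users while stats produces user (so that column would come out empty), and stats by raw _time yields one row per event timestamp. A hedged cleanup sketch (the span and field choices are guesses at the intent, not a confirmed fix):

```
index=dlp user!="N/A" threat_type=OUTGOING_EMAIL signature="EP*block" earliest=-1w@w latest=now
| bin _time span=1d
| stats count by user _time
| lookup AD_enrich.csv user OUTPUTNEW userPrincipalName AS Mail displayName AS FullName
| table _time user FullName Mail count
```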
Hi @PickleRick, thank you for your answer. Yes, it's a globally shared lookup with read grants to all; in fact, it runs in the search. It seems that there's something strange in the Data Model construction, as you can see in the shared screenshot. But it's in Splunk Cloud, so it should be correct! Ciao. Giuseppe
Hmm... Everything OK with export/permission settings on the lookup?
The problem here is in deciding on the proper logic for the alert. If you're supposed to get an event at the same time of the hour (like around XX:32) and then the next one within 5 minutes, you can get away with scheduling a search at, for example, each XX:08 and XX:38, searching some 6-7 minutes into the past, and checking if you have fewer than two results. That's the simplest solution and can often be enough. But if your case is more complicated (like the time the events are generated "floats" around the hour), you might need to schedule a search more often, search through data some 30+ minutes back, and calculate the event lag as I mentioned. It's about defining the problem precisely.
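The lag calculation described above can be sketched roughly like this (index, sourcetype, and the 300-second threshold are placeholders):

```
index=foo sourcetype=bar earliest=-60m@m
| stats max(_time) as last_seen
| eval lag_seconds = now() - last_seen
| where lag_seconds > 300
```

One caveat: if no events arrived at all in the window, stats returns no rows, so the alert condition should usually be "number of results is zero OR the search returns a row", e.g. by alerting when the base search over the window is empty as well.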
Hi at all, I'm trying to add a field from a lookup in a Data Model, but the field is always empty in the Data Model, e.g. running a search like the following:

| tstats count values(My_Datamodel.Application) AS Application FROM Datamodel=My_Datamodel BY sourcetype

but if I use the lookup command, it runs:

| tstats count values(My_Datamodel.Application) AS Application FROM Datamodel=My_Datamodel BY sourcetype
| lookup my_lookup.csv sourcetype OUTPUT Application

So the lookup is correct. When I try to add the field, it's possible to add it, but it's still always empty. Has anyone experienced this behavior and found a workaround? Ciao. Giuseppe
Well, no. That's not how you do it.

1. I know it's not gonna help here, but you forgot to do one very important thing, as this is something you're not experienced with: plan and test. You should have deployed a test instance (possibly using a trial version of Splunk), then done your planned migration and verified that everything was OK.

2. You say you have RHEL servers. Are you even using RPM packages? You should.

3. If the instructions say "unpack the new version over your existing files", it's _not_ the same as unpack and then overwrite with your existing files.

At this moment it's hard to tell what config you ended up with and what's really happening underneath. You can check the _internal index for errors. You can see whether your events are ingested into your indexes (for example by doing tstats over a recent short period of time). In this case you might simply have a misnamed server (many reports in MC search for a host with your server's name; if you changed it, it could have caused some confusion for Splunk).
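The tstats sanity check mentioned above can be sketched as (the index filter and time window are illustrative):

```
| tstats count where index=* earliest=-15m by index sourcetype
```

If recent counts show up per index and sourcetype, ingestion is working and the problem is more likely a naming or config mismatch than a broken pipeline.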
We successfully completed a Splunk upgrade from version 8.1.4 to 9.0.6 on the indexers, search heads, and DS, but we are facing an issue while upgrading the HF. Could anyone help with the whole Splunk upgrade steps for an HF? Also, please suggest a solution to fix the error we are facing after untarring the installation file and starting the service with accept license:

couldn't run "splunk" migrate: No such file or directory ERROR while running rename-cluster-app migration
We don't have permission to install the app; I will try to ask the infra team again. Is there an option to add the alert result to this query?

| rest splunk_server=local /servicesNS/-/-/saved/searches
| where alert_type!="always"
| table title,author,description,"eai:acl.owner","next_scheduled_time","action.email.to"
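One hedged sketch for enriching this with trigger history is to join against the scheduler's internal logs (the field names below come from _internal scheduler events; exact availability can vary by version, so treat this as an assumption to verify):

```
| rest splunk_server=local /servicesNS/-/-/saved/searches
| where alert_type != "always"
| rename title as savedsearch_name
| join type=left savedsearch_name
    [ search index=_internal sourcetype=scheduler alert_actions=*
      | stats count as fired_count max(_time) as last_fired by savedsearch_name ]
| table savedsearch_name author description fired_count last_fired "action.email.to"
```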