All Posts

I am getting the error "Could not load lookup=LOOKUP-minemeldfeeds_dest_lookup" in one of my dashboard panels. Any solutions?
Hi @hieuba Could you please share your old dashboard query (SPL) for the custom Missile Map dashboard, so that we can try to reproduce it in Dashboard Studio? Thanks.
Hi @inventsekar, you're correct. I have a custom Missile Map dashboard (only the JS code was changed), and I want to define it as a visualization type in Splunk Dashboard Studio.
Here are the contents of that page. I have redacted a little bit of info relating to the environment.
Hi @jbates58 Yes, at times the retention policy can be tricky. On the DMC server, please check Settings > Monitoring Console > Indexing > Indexes and Volumes > Index Detail: Instance. EDIT - Please check the docs at https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Indexesconf. One thing to remember: frozenTimePeriodInSecs vs maxTotalDataSizeMB can be a source of confusion as well (whichever limit is reached first takes precedence over the other).
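As a rough illustration of the settings discussed above (a minimal sketch, assuming the events really do land in the main index as described; the stanza name is a placeholder, so adjust it and distribute the change through your usual config management):

[main]
# Roll buckets to frozen once their newest event is older than 365 days.
frozenTimePeriodInSecs = 31536000
# Size-based limit for the whole index (500000 is the default); whichever limit is reached first wins.
maxTotalDataSizeMB = 500000

With no coldToFrozenDir or coldToFrozenScript configured, frozen buckets are simply deleted, which matches an "anything older than a year can go" requirement.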
Hi @GIA ... Let's troubleshoot bit by bit. Please try these; I feel something is wrong with >>> [|inputlookup internal_ranges.csv |] <<<

First:
| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic | search NOT (All_Traffic.src_ip [|inputlookup internal_ranges.csv |])

then:
| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic | search NOT (All_Traffic.src_ip [|inputlookup internal_ranges.csv ])
Hi All, I have tried looking over the documentation for this, but I am super confused and really struggling to wrap my head around it. I have an environment where Splunk is ingesting syslog from 2 firewalls. The logs are only audit/management related, and they need to be sent to a separate server for compliance (hence Splunk). I want to configure a retention policy where this data is deleted after 1 year, as that is the specific requirement.

From what I can tell, I just need to add the frozenTimePeriodInSecs line to the indexes.conf stanza for the "main" index (as this is where the events are going). Current ingestion is ~150,000 events per day, and daily ingestion is ~30-35MB. However, this is subject to change in the future as more firewalls come online. There is plenty of storage available; the requirement is just 1 year of searchable data. But I keep seeing things about hot/warm/cold/frozen buckets, and I just don't get it. All that's needed is 1 year of searchable data; anything older than (time.now() - 365 days) can be deleted. Can someone please assist me with what I need to do to make this work?
What I need help with is how to use lookup tables for the IPs in this search. I have several rules similar to this one, but I can't add any IPs inline; I have to use lookups for those. Below is how I am writing it, but it's obviously wrong. Thanks.

| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic
| search NOT (All_Traffic.src_ip [|inputlookup internal_ranges.csv |]) AND (All_Traffic.dest_ip [|inputlookup internal_ranges.csv |]) AND (All_Traffic.action="allow*") by _time All_Traffic.src_ip All_Traffic.dest_ip
| `drop_dm_object_name(All_Traffic)`
| lookup ip_iocs.csv ioc as src_ip OUTPUTNEW last_seen
| append [| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic where (All_Traffic.src_ip IN [|inputlookup internal_ranges.csv |]) AND NOT (All_Traffic.dest_ip IN [|inputlookup internal_ranges.csv|]) AND NOT (All_Traffic.protocol=icmp) by _time All_Traffic.src_ip All_Traffic.dest_ip | `drop_dm_object_name(All_Traffic)` | lookup ip_iocs.csv ioc as dest_ip OUTPUTNEW last_seen]
| where isnotnull(last_seen)
| head 51
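A hedged sketch of one way the lookup could be used here (assuming internal_ranges.csv has a column named ip that holds exact IP addresses rather than CIDR ranges; the column name is a guess, rename accordingly). The subsearch expands into an OR of All_Traffic.src_ip=<value> terms inside the tstats where clause:

| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.action) as action from datamodel=Network_Traffic.All_Traffic
    where NOT [| inputlookup internal_ranges.csv | rename ip as "All_Traffic.src_ip" | fields "All_Traffic.src_ip"]
    by _time All_Traffic.src_ip All_Traffic.dest_ip

If the lookup actually contains CIDR ranges, this expansion will not match; the usual alternative in that case is a lookup definition with CIDR match_type applied after the tstats, followed by a filter on the lookup output.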
I don't want to learn regex; I want to replace personal information with fixed strings. Can someone at Splunk give me the correct expression to use? Testing in other environments doesn't help, and since Splunk needs a restart just to try out a rule, this is really painful.
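For what it's worth, a generic masking sketch (the sourcetype name and patterns below are placeholders, not tailored to your data; the expressions would need to match your actual events). Index-time masking is done with SEDCMD rules in props.conf on the parsing tier:

[your:sourcetype]
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g
SEDCMD-mask_email = s/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+/redacted@example.com/g

To avoid restarting just to experiment, the same sed expression can be prototyped at search time first, e.g. | rex field=_raw mode=sed "s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g", and moved into props.conf once it behaves as expected.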
Hello, I have a CSV file with many, MANY columns (in my case there are 7334 columns with an average length of 145-146 chars each). This is a telemetry file exported from some networking equipment, and this is just part of the exported data. The file has over 1000 data rows, but I'm just trying to add 5 rows at the moment.

Trying to create an input for the file fails when adding more than 4175 columns, with the following error: "Accumulated a line of 512256 bytes while reading a structured header, giving up parsing header". I have already tried to increase all TRUNCATE settings to well above this value (several orders of magnitude) as well as the [kv] limits in limits.conf. Nothing helps.

I searched the forum here but couldn't find anything relevant. A Google search yielded two results: one where people decided that headers that are too long are the user's problem and did not offer any resolution (not even to say it's not possible), and the other went unanswered. I couldn't find anything relevant in the Splunk online documentation or REST API specifications either. I will also mention that processing the full data file with Python, using either the standard csv parser or pandas, works just fine and very quickly. The total file size is ~92MB, which is not big at all IMHO.

My Splunk info: Version: 9.1.2, Build: b6b9c8185839, Server: 834f30dfffad, Products: hadoop

Needless to say, the web frontend crashes entirely when I try to create the input, so I'm doing everything via the Python SDK now. Any ideas how this can be fixed so I can add all of my data?
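One possible workaround, purely a sketch and only if reshaping the data is acceptable (file names are placeholders, and it assumes the target sourcetype is configured for JSON, e.g. KV_MODE=json): pre-process the CSV with the Python that already parses it fine, so Splunk never has to read the 512 KB structured header at all:

import csv
import json

# Convert the wide CSV into one self-describing JSON object per line.
with open("telemetry.csv", newline="") as src, open("telemetry.jsonl", "w") as dst:
    for row in csv.DictReader(src):
        dst.write(json.dumps(row) + "\n")

Each resulting event is still large, so the TRUNCATE setting for that sourcetype may need to stay raised, or each row could be split into several smaller events.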
At a glance, a lookup in the data model definition should work correctly if, as previously noted, the lookup definition and lookup source are correctly exported relative to the data model and everything is correctly replicated to the indexers. What happens when you execute the derived data model search directly? With a dataset named Foo and a lookup named bar, for example, it should contain:

... | lookup bar baz output qux | rename baz as Foo.baz | rename qux as Foo.qux | ...

and as with other fields, the new fields should be addressable using their dataset prefix. Does an unaccelerated data model return the field?
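A quick way to check, using the hypothetical names from the example above (substitute your real model and dataset names), is to run the unaccelerated dataset search and see whether the lookup output field comes back:

| datamodel Your_Model Foo search
| fields Foo.baz Foo.qux

and then compare with the accelerated side, e.g. | tstats summariesonly=true values(Foo.qux) as qux from datamodel=Your_Model.Foo by Foo.baz.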
Hi @bhagyashriyan, This is a challenge in Splunk Cloud at scale. If you manage data flow at the source or integration layers, you may prefer to tee your data to both Splunk Cloud and Google Cloud Pub/Sub at one of those layers. Otherwise, you can execute relatively simple saved searches in Splunk Cloud using an external client and stream the output to Google Cloud Pub/Sub. For example: index=foo | fields - _raw | table * will return _time and all fields available at search time from the search's execution context (user and app). Note that results are returned in _time reversed order, newest to oldest. In Google Cloud, you can use a combination of low cost services to periodically execute the search via the Splunk Cloud REST API in batches over fixed _time intervals and stream the results to Google Cloud Pub/Sub.
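A minimal sketch of that pattern (assuming token authentication to Splunk Cloud and an already-created Pub/Sub topic; the host, index, project, and topic names are placeholders):

import requests
from google.cloud import pubsub_v1

SPLUNK_HOST = "https://yourstack.splunkcloud.com:8089"
SPLUNK_TOKEN = "your-splunk-auth-token"

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("your-gcp-project", "your-topic")

# Export results for a fixed _time interval; the export endpoint streams
# one JSON object per result line.
resp = requests.post(
    f"{SPLUNK_HOST}/services/search/jobs/export",
    headers={"Authorization": f"Bearer {SPLUNK_TOKEN}"},
    data={
        "search": "search index=foo | fields - _raw | table *",
        "earliest_time": "-15m@m",
        "latest_time": "@m",
        "output_mode": "json",
    },
    stream=True,
)
resp.raise_for_status()

for line in resp.iter_lines():
    if line:
        # Each non-empty line is one JSON result; publish it as the message payload.
        publisher.publish(topic_path, data=line)

Scheduling this on a fixed interval (Cloud Scheduler, Cloud Functions, or similar) with non-overlapping earliest/latest windows gives the batch-and-stream behaviour described above.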
Hi @tscroggins, thank you for your answer. I don't have automatic lookups, and lookups and knowledge bundles should be correctly replicated because we are on Splunk Cloud. I could check this by opening a case with Support. Thank you again for your help. Ciao. Giuseppe
Yes, it should be displayed, but it may be cached. Did you restart splunkweb or splunk after removing the files?
Hi @gcusello, Are automatic lookups working correctly, is the lookup replicated, and is the knowledge bundle up to date and replicating?
The steps for upgrading an HF are exactly the same as those for upgrading an indexer or search head.  The good news is the old version of the HF will work with the newer version of the indexers and DS. This old answer may help a little: https://community.splunk.com/t5/Installation/Upgrade-to-6-2-fails-on-windows-7-with-quot-ERROR-while-running/m-p/175801
I have this query in my report scheduled to run every week, but the results are for all time. How can I fix it?

index=dlp user!=N/A threat_type=OUTGOING_EMAIL signature="EP*block" earliest=-1w@w latest=now
| stats count by user _time
| lookup AD_enrich.csv user OUTPUTNEW userPrincipalName AS Mail, displayName AS FullName, wwwHomePage AS ComputerName, mobile AS Mobile, description AS Department, ManagerName, ManagerLastName
| table _time, Users, FullName, Mail, Mobile, ComputerName, Department, ManagerName, ManagerLastName, count
Hi @PickleRick, thank you for your answer. Yes, it's a Global shared lookup with read grants for all; in fact, it works fine when run in a search. It seems there's something strange in the data model construction, as you can see in the shared screenshot. But it's in Splunk Cloud, so it should be correct! Ciao. Giuseppe
Hmm... Everything OK with export/permission settings on the lookup?
The problem here is in deciding on the proper logic for the alert. If you're supposed to get an event at the same time of the hour (like around XX:32) and then the next one within 5 minutes, you can get away with scheduling a search at, for example, XX:08 and XX:38, searching some 6-7 minutes into the past, and checking whether you have fewer than two results. That's the simplest solution and can often be enough. But if your case is more complicated (like the time the events are generated "floats" around the hour), you might need to schedule a search more often, search through data some 30+ minutes back, and calculate the event lag as I mentioned. It's about defining the problem precisely.
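A sketch of the simple variant (index, sourcetype, and the cron schedule 8,38 * * * * are placeholders):

index=your_index sourcetype=your_sourcetype earliest=-7m@m latest=now
| stats count
| where count < 2

With the alert set to trigger when the number of results is greater than zero, it fires whenever fewer than two events arrived in the last seven minutes.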