All Posts

Hi all, I'm trying to add a field from a lookup in a Data Model, but the field is always empty in the Data Model, e.g. running a search like the following:

| tstats count values(My_Datamodel.Application) AS Application FROM datamodel=My_Datamodel BY sourcetype

But if I use the lookup command instead, it works:

| tstats count values(My_Datamodel.Application) AS Application FROM datamodel=My_Datamodel BY sourcetype | lookup my_lookup.csv sourcetype OUTPUT Application

So the lookup itself is correct. When I try to add the field to the Data Model, it can be added, but it is still always empty. Has anyone experienced this behavior and found a workaround? Ciao. Giuseppe
Well, no. That's not how you do it.

1. I know it won't help here, but you skipped one very important step, since this is something you're not experienced with: plan and test. You should have deployed a test instance (possibly using a trial version of Splunk), performed your planned migration there, and verified that everything was OK.

2. You say you have RHEL servers. Are you even using RPM packages? You should.

3. If the instructions say "unpack the new version over your existing files", that is _not_ the same as unpacking and then overwriting with your existing files.

At this moment it's hard to tell what config you ended up with and what's really happening underneath. You can check the _internal index for errors. You can check whether your events are being ingested into your indexes (for example, by running tstats over a recent short period of time). In this case you might simply have a misnamed server (many reports in the MC search for a host with your server's name; if you changed it, that could have caused some confusion for Splunk).
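For the tstats sanity check mentioned above, a minimal sketch (the one-hour window is just an example; adjust to your situation):

```
| tstats count where index=* earliest=-1h by index
```

If your indexes show non-zero counts here, data is flowing on the new host; if not, start with the errors in _internal.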
We successfully completed the Splunk upgrade from version 8.1.4 to 9.0.6 on the indexers, search heads, and DS, but we are facing an issue while upgrading the HF. Could anyone help with the full Splunk upgrade steps on a HF? Also, please suggest a fix for the error we see after untarring the installation file and starting the service with the license accepted:

couldn't run "splunk" migrate: No such file or directory
ERROR while running rename-cluster-app migration
We don't have permission to install the app; I will try to ask the infra team again. Is there an option to add the alert result to this query?

| rest splunk_server=local /servicesNS/-/-/saved/searches | where alert_type!="always" | table title, author, description, "eai:acl.owner", "next_scheduled_time", "action.email.to"
This would work, but possibly not the way you meant. I suppose you want this part

([^\\"]+)

to match everything up to (but not including) the closing \". It doesn't work that way. It will match any sequence of characters that are neither a backslash nor a quote. That means if your string contains some escaped character (like \'), your match terminates there. And since you explicitly require the \" part immediately after that, the whole regex won't match.

Oh, and since you're only matching the "static" parts of your events, the match groups you use in FORMAT will only contain those static parts, which is probably not what you want. You could try to fiddle with negative lookaheads/lookbehinds, like

(.*?\\"addressLine1\\":\\").*(?<!\\")(\\".*)
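To illustrate the failure mode, here is a small Python sketch (Python's re is close enough to PCRE for this pattern; the event strings below are made up for illustration):

```python
import re

# The pattern from the proposed transforms.conf stanza. In the raw event
# the JSON is escaped, e.g. ...\"addressLine1\":\"value\"...
pattern = re.compile(r'(\\"addressLine1\\":\\")([^\\"]+)(\\")')

# Plain value: the character class matches everything up to the closing \"
ok = r'{\"addressLine1\":\"10 Main St\"}'
print(bool(pattern.search(ok)))   # True

# Value containing an escaped quote: [^\\"]+ stops at the backslash, the
# mandatory (\\") that follows cannot match there, so the whole regex fails
bad = r'{\"addressLine1\":\"10 \\\"Main\\\" St\"}'
print(bool(pattern.search(bad)))  # False
```

So the stanza masks "clean" values but silently skips any event whose address contains an escaped character.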
No. The format command is only responsible for formatting the data on output (and if you don't include it explicitly, it's performed implicitly with default settings). The limit is the limit.
Hi @gillisme 

The Splunk docs (I selected your version, 8.2.6) suggest copying /opt/splunk from the old to the new system and then installing Splunk on the new system. (This is important: when Splunk is installing, it checks the config files and needs to alter the installation depending on them.)
https://docs.splunk.com/Documentation/Splunk/8.2.6/Installation/MigrateaSplunkinstance#How_to_migrate

When you migrate on *nix systems, you can extract the tar file you downloaded directly over the copied files on the new system, or use your package manager to upgrade using the downloaded package. On Windows systems, the installer updates the Splunk files automatically.

1. Stop Splunk Enterprise services on the host from which you want to migrate.
2. Copy the entire contents of the $SPLUNK_HOME directory from the old host to the new host. Copying this directory also copies the mongo subdirectory.
3. Install Splunk Enterprise on the new host.
4. Verify that the index configuration (indexes.conf) file's volume, sizing, and path settings are still valid on the new host.
5. Start Splunk Enterprise on the new instance.
6. Log into Splunk Enterprise with your existing credentials.
7. After you log in, confirm that your data is intact by searching it.

Your 4th step - "copy the old rhel6 /data/splunk dir on to the new rhel8 server, in the /data/splunk dir" - is incorrect, because the data in hot buckets must be treated carefully. Please check these steps from the doc:

How to move index buckets from one host to another

If you want to retire a Splunk Enterprise instance and immediately move the data to another instance, you can move individual buckets of an index between hosts, as long as certain conditions are met. When you copy individual bucket files, you must make sure that no bucket IDs conflict on the new system. Otherwise, Splunk Enterprise does not start. You might need to rename individual bucket directories after you move them from the source system to the target system.

1. Roll any hot buckets on the source host from hot to warm.
2. Review indexes.conf on the old host to get a list of the indexes on that host.
3. On the target host, create indexes that are identical to the ones on the source system.
4. Copy the index buckets from the source host to the target host.
5. Restart Splunk Enterprise.

PS - if any reply helped you, please upvote/add karma points. If any reply solves your query, please accept it as a solution. Thanks.
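Regarding the bucket ID conflicts mentioned above, here is a rough Python sketch of a pre-copy check, assuming the standard db_<newest>_<oldest>_<localid> (or rb_ for replicated copies) bucket directory naming; the helper and the bucket names are made up for illustration:

```python
import re

# Warm/cold bucket directories look like db_<newest>_<oldest>_<localid>;
# the trailing local ID is what must not collide on the target host.
BUCKET_RE = re.compile(r'^(?:db|rb)_\d+_\d+_(\d+)')

def bucket_ids(names):
    """Extract the trailing local bucket IDs from a list of bucket dir names."""
    return {m.group(1) for m in map(BUCKET_RE.match, names) if m}

def conflicting_ids(source_names, target_names):
    """IDs present on both hosts; rename these buckets before copying."""
    return sorted(bucket_ids(source_names) & bucket_ids(target_names), key=int)

# Example with made-up bucket directory names:
src = ["db_1700000300_1700000000_3", "db_1700000600_1700000301_4"]
dst = ["db_1690000300_1690000000_3"]
print(conflicting_ids(src, dst))  # ['3'] -> rename this bucket before copying
```

Run it per index against directory listings from both hosts; anything it reports needs a new, unused local ID on the target before Splunk is restarted.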
Hi @abedcx 

The issue is the timestamp. I believe you found out some details from @richgalloway's replies.

The actual issue: when you are searching, there are very many events with the same timestamp, so Splunk struggles to search through them. May we know your search query (SPL)? We can fine-tune it so that Splunk does not need to look at so many events. Thanks.
To tell Splunk what to use for the date, include a DATETIME_CONFIG setting in a props.conf file. Depending on your needs, use either DATETIME_CONFIG = CURRENT (stamp events with the index time) or DATETIME_CONFIG = NONE (disable timestamp extraction).
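As a minimal sketch, assuming a hypothetical sourcetype name my_csv for your input (the stanza goes in props.conf on the first heavy forwarder or indexer that parses the data):

```
[my_csv]
# Ignore any dates inside the CSV and stamp events with the index time
DATETIME_CONFIG = CURRENT
```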
Hi,

Not sure if this is even a problem, but I thought I would be proactive and ask what other folks have experienced. *Note* that I am not an experienced Splunk admin - I can do a few things (add users, add forwarders, and I have updated to newer versions), but I really don't know how to use it. Admin newbie.

We are running Splunk Enterprise v8.2.6 on a single RHEL6 server. We need to get off RHEL6, so the plan was to migrate the Splunk install to a new RHEL8 server and then upgrade to the newest Splunk version. My understanding of Splunk is that it is pretty self-contained - to update the version, you just overwrite the /opt/splunk dir with the new Splunk tar file. Our data is held in a separate filesystem, the /data/splunk dir. So, the process was:

1. Install Splunk v8.2.6 on the new RHEL8 server, and verify it starts and works
2. Shut down the old RHEL6 Splunk
3. Copy the old RHEL6 /opt/splunk dir on top of the new RHEL8 /opt/splunk dir
4. Copy the old RHEL6 /data/splunk dir to the new RHEL8 server, into the /data/splunk dir
5. Shut down the RHEL6 Splunk server
6. Ensure all the networking, DNS, etc. is resolving to the new RHEL8 server
7. Start up Splunk on the new RHEL8 server

I followed this process this morning, and it appears to have worked. I am seeing forwarders (there are 160) check in on the new server, and I can run searches on host=X and see that X has been in contact. But there is one thing I am seeing that I don't know is a problem or not. If I look at "Indexes and Volumes: Instance" for the previous 24 hours, there is data there up until the old RHEL6 server was turned off. Since moving to the new RHEL8 server, the indexes all appear to be 0GB in size. I don't know enough to tell whether this is an issue. It seems like it is, to me, but I am not really sure - could everything just be rebuilding on the new server, or has the data become unavailable somehow? If anyone has an answer I would be glad to know. Otherwise I find out Monday morning, I guess, when the users log on to the new RHEL8 server.

Thanks, Michael.
Thank you so much for your time, @richgalloway 

But I noticed that Splunk reads the date from my CSV, and this date is my own data, not a Splunk timestamp. How can I tell Splunk not to use the date that is in my CSV and instead generate a date when indexing the data?

In other words, as you can see in my screenshot below, my date is the same and duplicated. I have more than 3 billion records, most of them with the same date, and this date belongs to my data. So how can I tell Splunk not to use this date?
How does the current search fail to meet expectations?  What are those expectations? I'm not sure CIDRs are supported in the tstats command.
So would this work?

[address_masking]
REGEX = (\\"addressLine1\\":\\")([^\\"]+)(\\")
FORMAT = $1(masked)$3
There is a hardcoded 256K (262144) field limit on the S2S receiver, so the connection will be terminated. It's likely that the HF/SH will retry sending the same data again, thus blocking the HF/SH permanently.

Check whether there is an issue with the field extraction on the forwarder side - after all, 256K fields is far too many for one event. Assuming you still need that event with 256K+ fields, here is what to do:

1. Move all props.conf/transforms.conf settings for the input source/sourcetype in question to the indexer tier. (Note: the ERROR log on the indexing side provides the source/sourcetype/host.)
2. Add the following config to the relevant input stanza in inputs.conf so that parsing is moved from the HF/SH to the indexer tier:

queue = indexQueue
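A hedged sketch of the inputs.conf change for step 2 (the monitor path and sourcetype name are hypothetical; use the ones from your own error log):

```
[monitor:///var/log/myapp/huge_fields.log]
sourcetype = my_troubled_sourcetype
# Bypass parsing on this HF/SH and send events straight to the index
# queue, so parsing happens on the indexer tier instead
queue = indexQueue
```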
ERROR TcpInputProc [nnnnnn FwdDataReceiverThread] - Encountered Streaming S2S error=Too many fields. fieldCount=nnnnn with source::<channel> for data received from src=x.x.x.x:nnnn.

Here <channel> has all the required info about the troubled sourcetype/source/host.
I've run into that 10k limit before, for sure, but isn't this something that | format helps with? The "mylist" directory, for example, might have 50k entries, but it's returned as a single line (1 row). Thank you!
You should contact your Splunk account team to find out if you're allowed to use your license that way.  I suspect not, but they would have the definitive answer.
Hi @splunkettes - I'm a Community Moderator in the Splunk Community. This question was posted 4 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Just remember the caveats regarding subsearches (the limits on execution time and on the number of returned results).
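For reference, those limits live in the [subsearch] stanza of limits.conf; a sketch with what I believe are the shipped defaults (verify against the limits.conf.spec for your version before changing anything):

```
[subsearch]
# Maximum number of results a subsearch can return
maxout = 10000
# Maximum runtime (in seconds) before the subsearch is finalized
maxtime = 60
```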