All Posts

No. The format command is only responsible for formatting subsearch results on output (and if you don't include it explicitly, it's performed implicitly with default settings). It does not raise the subsearch result limit. The limit is the limit.
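For reference, the subsearch result cap lives in limits.conf, not in format. A sketch of where to look — the stanza name is real, but the values shown are common defaults from recent Splunk versions and may differ in yours, so verify against your own limits.conf and the docs before changing anything:

```
# limits.conf on the search head -- defaults shown for illustration only
[subsearch]
# maximum number of results a subsearch may return
maxout = 10000
# maximum runtime of a subsearch, in seconds
maxtime = 60
```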
Hi @gillisme  The Splunk docs (I selected your version, 8.2.6) suggest copying /opt/splunk from the old system to the new one and then installing Splunk on the new system. (This is important: when Splunk is installing, it checks the config files and may need to alter the installation depending on them.) https://docs.splunk.com/Documentation/Splunk/8.2.6/Installation/MigrateaSplunkinstance#How_to_migrate

When you migrate on *nix systems, you can extract the tar file you downloaded directly over the copied files on the new system, or use your package manager to upgrade using the downloaded package. On Windows systems, the installer updates the Splunk files automatically.

1. Stop Splunk Enterprise services on the host from which you want to migrate.
2. Copy the entire contents of the $SPLUNK_HOME directory from the old host to the new host. Copying this directory also copies the mongo subdirectory.
3. Install Splunk Enterprise on the new host.
4. Verify that the index configuration (indexes.conf) file's volume, sizing, and path settings are still valid on the new host.
5. Start Splunk Enterprise on the new instance.
6. Log into Splunk Enterprise with your existing credentials.
7. After you log in, confirm that your data is intact by searching it.

Your 4th step ("copy the old rhel6 /data/splunk dir on to the new rhel8 server, in the /data/splunk dir") is incorrect as written, because data in hot buckets must be treated carefully. Please check these steps from the doc:

How to move index buckets from one host to another

If you want to retire a Splunk Enterprise instance and immediately move the data to another instance, you can move individual buckets of an index between hosts, as long as certain conditions (listed in the doc) are met. When you copy individual bucket files, you must make sure that no bucket IDs conflict on the new system. Otherwise, Splunk Enterprise does not start. You might need to rename individual bucket directories after you move them from the source system to the target system.

1. Roll any hot buckets on the source host from hot to warm.
2. Review indexes.conf on the old host to get a list of the indexes on that host.
3. On the target host, create indexes that are identical to the ones on the source system.
4. Copy the index buckets from the source host to the target host.
5. Restart Splunk Enterprise.

PS - if any reply helped you, please upvote/add karma points. If any reply solves your query, please accept it as the solution. Thanks.
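The "no conflicting bucket IDs" check mentioned above can be scripted before you copy anything. A minimal sketch, assuming the common db_<newestTime>_<oldestTime>_<localID> warm-bucket directory naming (the example directory names below are hypothetical; clustered buckets have extra suffixes this sketch ignores):

```python
import re

# Warm/replicated bucket directories: db_<newest>_<oldest>_<localID>
# (rb_ prefix for replicated copies). The local ID must be unique per
# index on the target host, or Splunk Enterprise will not start.
BUCKET_RE = re.compile(r"^(?:db|rb)_\d+_\d+_(\d+)$")

def conflicting_bucket_ids(source_buckets, target_buckets):
    """Return the set of local bucket IDs present in both listings."""
    def ids(names):
        found = set()
        for name in names:
            m = BUCKET_RE.match(name)
            if m:
                found.add(int(m.group(1)))
        return found
    return ids(source_buckets) & ids(target_buckets)

# Hypothetical example: ID 3 exists on both hosts, so it must be renamed.
clashes = conflicting_bucket_ids(
    ["db_1650000000_1649000000_3"],
    ["db_1651000000_1650500000_3", "db_1650000001_1649000001_7"],
)
print(clashes)  # {3}
```

Any ID the function reports would need its directory renamed on one side before starting Splunk on the target host.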
Hi @abedcx  The issue is the timestamp. I believe you found some details in @richgalloway 's replies. The actual issue: when you search, there are very many events with the same timestamp, so Splunk struggles to search them. Could you share your search query (SPL)? We can fine-tune it so that Splunk doesn't need to look at so many events. Thanks.
To tell Splunk what to use for the date, include a DATETIME_CONFIG setting in a props.conf file. Depending on your needs, set either DATETIME_CONFIG = CURRENT (use the current time at indexing) or DATETIME_CONFIG = NONE (disable timestamp extraction from the event).
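A minimal props.conf sketch of that setting — the sourcetype name my_csv is a placeholder, and the stanza belongs wherever the data is first parsed (indexer or heavy forwarder):

```
# props.conf -- "my_csv" is a hypothetical sourcetype name
[my_csv]
# use the current time at indexing instead of any date found in the event
DATETIME_CONFIG = CURRENT
```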
Hi, Not sure if this is even a problem, but I thought I would be proactive and ask what other folks have experienced. *Note* that I am not an experienced Splunk admin. I can do a few things (add users, add forwarders, and I have updated to newer versions), but I really don't know how to use it. Admin newbie.

We are running Splunk Enterprise v8.2.6 on a single RHEL6 server. We need to get off of RHEL6, so the plan was to migrate the Splunk install to a new RHEL8 server, and then upgrade to the newest Splunk version. My understanding of Splunk is that it is pretty self-contained: to update the version, you just overwrite the /opt/splunk dir with the new Splunk tar file. Our data is held in a separate filesystem, the /data/splunk dir. So the process was:

1. Install Splunk v8.2.6 on the new RHEL8 server, and verify it starts and works.
2. Shut down the old RHEL6 Splunk.
3. Copy the old RHEL6 /opt/splunk dir on top of the new RHEL8 /opt/splunk dir.
4. Copy the old RHEL6 /data/splunk dir onto the new RHEL8 server, into the /data/splunk dir.
5. Shut down the RHEL6 Splunk server.
6. Ensure all the networking, DNS, etc. is resolving to the new RHEL8 server.
7. Start up Splunk on the new RHEL8 server.

I followed this process this morning, and it appears to have worked. I am seeing forwarders (there are 160) check in on the new server, and I can run searches on host=X and see that X has been in contact. But there is one thing I am seeing that I don't know is a problem or not. If I look at "Indexes and Volumes: Instance" for the previous 24 hours, there is data there up until the old RHEL6 server was turned off. Since moving to the new RHEL8 server, the indexes all appear to be 0GB in size.

I don't know enough to tell whether this is an issue. It seems like it is, to me, but I am not really sure. Could everything just be rebuilding on the new server, or has the data become unavailable somehow? If anyone has an answer I would be glad to know. Otherwise I find out Monday morning, I guess, when the users log on to the new RHEL8 server. Thanks, Michael.
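One way to sanity-check on-disk index sizes on a host like this is the dbinspect command. A hedged SPL sketch (run it over All time; field names as documented for dbinspect, adjust the index filter to taste):

```
| dbinspect index=*
| stats sum(sizeOnDiskMB) AS size_mb, count AS buckets BY index
```

If the buckets were copied correctly, the per-index totals here should roughly match what the old server reported.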
Thank you so much for your time, @richgalloway . But I noticed that Splunk reads the date from my CSV, and that date is meaningful to me, not meant to be the Splunk event time. How can I tell Splunk not to use this date (the one in my CSV) and instead generate a timestamp when indexing the data? In other words, as you can see in my screenshot below, the date is the same and duplicated. I have more than 3 billion records, most of them with the same date, and that date is for my own use. So how can I tell Splunk not to use it?
How does the current search fail to meet expectations?  What are those expectations? I'm not sure CIDRs are supported in the tstats command.
So would this work?

[address_masking]
REGEX = (\\"addressLine1\\":\\")([^\\"]+)(\\")
FORMAT = $1(masked)$3
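The proposed REGEX/FORMAT pair can be checked outside Splunk with an equivalent substitution in Python's re module. A sketch, assuming the data really contains literal backslash-quote sequences as shown earlier in the thread (Python regex semantics are close enough here to exercise the pattern, though transforms.conf uses PCRE):

```python
import re

# The data as it arrives: JSON-ish text with escaped quotes
# (literal backslash followed by quote).
event = r'\"addressLine1\":\"1234 Main Street\",'

# Equivalent of the proposed transforms.conf pair:
#   REGEX  = (\\"addressLine1\\":\\")([^\\"]+)(\\")
#   FORMAT = $1(masked)$3
pattern = r'(\\"addressLine1\\":\\")([^\\"]+)(\\")'
masked = re.sub(pattern, r'\1(masked)\3', event)
print(masked)  # \"addressLine1\":\"(masked)\",
```

So yes, with the backslashes doubled the pattern matches and the address is replaced, at least under these Python-side assumptions.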
There is a hardcoded 256K (262144) field limit on the S2S receiver, so the connection will be terminated. It's likely that the HF/SH will retry sending the same data again, thus blocking the HF/SH permanently.

Check whether there is an issue with the field extraction on the forwarder side; after all, 256K fields is too many for one event. Assuming you still need that event with 256K+ fields, here is what you do:

1. Move the props.conf/transforms.conf settings for the input source/sourcetype in question (to the indexing tier). Note: the ERROR log on the indexing side provides source/sourcetype/host.
2. Add the following config to the source stanza in question in inputs.conf, so that parsing is moved from the HF/SH to the IDX tier:

queue = indexQueue
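For step 2, a sketch of what the stanza might look like — the monitor path and sourcetype below are placeholders, and only the queue line is the actual change:

```
# inputs.conf on the HF/SH -- path and sourcetype are hypothetical
[monitor:///var/log/myapp/app.log]
sourcetype = my_sourcetype
# forward the data unparsed so that parsing happens on the indexers
queue = indexQueue
```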
ERROR TcpInputProc [nnnnnn FwdDataReceiverThread] - Encountered Streaming S2S error=Too many fields. fieldCount=nnnnn with source::<channel> for data received from src=x.x.x.x:nnnn

where <channel> carries the required info about the troubled sourcetype/source/host.
I've run into that 10k limit before, for sure, but isn't this something that | format helped with? The "mylist" directory, for example, might have 50k entries, but it's returned as a single line (1 row). Thank you!
You should contact your Splunk account team to find out if you're allowed to use your license that way.  I suspect not, but they would have the definitive answer.
Hi @splunkettes - I’m a Community Moderator in the Splunk Community. This question was posted 4 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Just remember the caveats regarding subsearches (the limits on execution time and on the number of returned results).
Well... if your data looks like this:

\"addressLine1\":\"1234 Main Street\",

and your regex looks like this:

(?<=\"addressLine1\":\")[^\"]*

it won't match. Remember that in regex, backslashes are used to escape things. If you need to match the literal string \" you need to escape the backslash to match it literally, like this: \\" In your regex the backslashes are silently ignored, since there is nothing after them that requires escaping, so the following character is taken literally (as it would be without the backslash as well). Also, your negated character class [^\"] is probably not what you wanted it to be: the backslash in this case is not needed, because there is nothing to escape about the quote mark.
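The point about ignored backslashes can be demonstrated outside Splunk with Python's re module (a sketch; PCRE as used by Splunk behaves the same way for these particular escapes):

```python
import re

# Data with literal backslash-quote sequences, as in the thread.
event = r'\"addressLine1\":\"1234 Main Street\",'

# Backslashes unescaped: in the regex, \" is just an (unnecessary)
# escape of the quote, so this pattern looks for "addressLine1":"
# without any backslashes and never matches this data.
broken = r'(?<=\"addressLine1\":\")[^\"]*'
print(re.search(broken, event))  # None

# Backslashes doubled (\\) so each matches a literal backslash:
fixed = r'(?<=\\"addressLine1\\":\\")[^\\"]*'
print(re.search(fixed, event).group(0))  # 1234 Main Street
```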
Thanks for the response, @PickleRick . Here is my logic: every 30 minutes one event1 is generated, and within 5 minutes an event2 has to be generated. If it isn't, the alert has to trigger. I hope that makes the logic clear.
Thanks for the response. Using an eval condition and counting the values did not give proper results. Here is my logic: every 30 minutes one event1 is generated, and within 5 minutes an event2 has to be generated. If it isn't, the alert has to trigger.
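One possible way to express that logic in SPL, as a sketch only — the index name and the EVENT1/EVENT2 match strings are placeholders for your actual events; schedule it every few minutes and alert when any result is returned:

```
index=my_index ("EVENT1" OR "EVENT2") earliest=-40m
| eval type=if(searchmatch("EVENT1"), "event1", "event2")
| stats latest(eval(if(type=="event1", _time, null()))) AS t1
        latest(eval(if(type=="event2", _time, null()))) AS t2
| where isnotnull(t1) AND (isnull(t2) OR t2 < t1) AND (now() - t1) > 300
```

The final where clause returns a row only when the most recent event1 is more than 5 minutes (300 s) old and no event2 has followed it, which is the condition you want to alert on.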
Hi @Wynd - I’m a Community Moderator in the Splunk Community. This question was posted 4 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
@phanTom  Correct, multiple artifacts in a container upon creation. It looks like there are duplicate values, however the artifact ID is different. I do have multi-value fields configured (default). Is that where you are suggesting making the change?
@inventsekar @phanTom  Thank you. So the remaining disconnect for me is that when creating an [automation] playbook, you appear to need to assign it a label to run against. In this instance, could I apply something like "on-demand" as the label (or tag?) to prevent it from being run automatically?