Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


In the meantime, if you want to move the existing data from your indexer to your SH, stop Splunk on both servers and copy the full directory structure for each index (usually under $SPLUNK_DB, by default $SPLUNK_HOME/var/lib/splunk/<indexname>) from the old indexer to the new server. After copying, ensure that indexes.conf on the new instance points to the correct paths for these indexes, then restart Splunk on the new instance for the data to become available. If no indexes with the same name already exist on the new instance, you can simply copy the directories. Both source and destination should run the same OS and compatible Splunk versions; do not copy buckets from a newer Splunk version to a much older one. If your SH is still set up to search your IDX, you should probably disconnect it at this point, or you may see duplicate data. Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
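As a sketch, an indexes.conf stanza on the new instance might look like the following (the index name "web_logs" is a hypothetical example; substitute your actual index names and paths):

```
# indexes.conf on the new all-in-one instance
# (hypothetical index name "web_logs" for illustration)
[web_logs]
homePath   = $SPLUNK_DB/web_logs/db
coldPath   = $SPLUNK_DB/web_logs/colddb
thawedPath = $SPLUNK_DB/web_logs/thaweddb
```

The copied bucket directories must land under the homePath/coldPath locations this stanza declares, or Splunk will not see them after the restart.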
Hi @SN1 I'm not 100% sure I'm following your requirements here. Which scenario is this? 1) You want to move existing stored data from your indexer to your SH to turn it into an All-In-One? 2) You want to configure the indexer to forward new data as it arrives to the SH? 3) You want to move existing data *and* configure forwarding of new data to the SH? Please let me know so we can provide a better response.
OK, let me explain briefly: we are making our search head standalone, so now I want to send some important data, which is currently being stored and onboarded from the source on the indexer, to the Search Head. Is this clear?
@SN1 To clarify your scenario: do you want the Search Head to query the indexer for data as needed, or to have the data stored on the Search Head itself (for testing or some other reason) without using the indexer?
We have an index where the data is currently being stored and indexed on the indexer. Now I am making the Search Head standalone, and I want to send the data from the indexer to the SH. How do I do it?
There could be a maximum of 3 versions per day.
We are currently using the Python for Scientific Computing app (v2.0.2) on an on-premises Linux instance and are planning to upgrade the app to the latest version, 4.2.3. When upgrading, should we just upload the app package through Splunk Web and check the upgrade checkbox? Python for Scientific Computing (for Linux 64-bit) | Splunkbase
@a1bg503461 The error indicates that opt_ca_certs_path is not defined in your configuration. Are you using SSL/TLS with Elasticsearch? If yes, make sure you specify your .crt path in your config, e.g.:

opt_ca_certs_path = /path/to/your/ca.crt

If you are not using SSL/TLS, try the following in your config:

use_ssl = 0
# opt_ca_certs_path =

Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
@cdevoe57 As mentioned by @bowesmana, it's not best to use join, as it can sometimes cause fields from the lookup to be lost. But if you still want to use join, can you try the below?

| inputlookup system_info.csv
| eval System_Name=System
| join type=left System_Name
    [ | search index=servers sourcetype=logs
      | stats latest(_time) as Time by System_Name
      | eval mytime=strftime(Time,"%Y-%m-%dT%H:%M:%S")
      | eval now_time = now()
      | eval last_seen_ago_in_seconds = now_time - Time ]
| stats values(*) as * by System_Name
| lookup system_info.csv System_Name OUTPUT Location Responsible
| eval MISSING = if(isnull(last_seen_ago_in_seconds) OR last_seen_ago_in_seconds>7200,"MISSING","GOOD")
| where MISSING=="MISSING"
| table System_Name Location Responsible MISSING

Or you can also try the below, without join:

index=servers sourcetype=logs
| stats latest(_time) as Time by System_Name
| eval last_seen_ago_in_seconds = now() - Time
| eval MISSING = if(isnull(last_seen_ago_in_seconds) OR last_seen_ago_in_seconds>7200, "MISSING", "GOOD")
| where MISSING=="MISSING"
| lookup system_info.csv System_Name OUTPUT Location Responsible
| table System_Name Location Responsible MISSING last_seen_ago_in_seconds
| sort -last_seen_ago_in_seconds
@caschmid \d+ matches only digits, not any word. If "This is my" is always constant, you can try the below:

rex field=_raw "This is my (?<string>\w+)"
| stats count by string
So let me start by saying I've been struggling with these lookup commands. My objective here is to use the lookup, since it contains all known servers, to find the servers that are not logging, including those that have yet to log. This modified query now gives me the other fields. However, the results are wrong. In the end I need to get the list of servers in the lookup that are not in the query results (index=servers sourcetype=logs).
Hello @heathramos, On the DNS server you can find possible ERROR logs around the issue by going to $SPLUNK_HOME/var/log/splunk and looking for the file named streamfwd.log. Please check for ERROR entries and share them here so we can help you with possible fixes.
Hello @Dallastek1, The explanation given by @livehybrid is correct here. Based on the ERROR messages and screenshot, it appears that you are missing required permissions. Also make sure that you have granted the permissions as Application permissions and not Delegated ones, as Delegated permissions are not the correct form here.
Firstly, join is not a good way to do things in Splunk - it has limitations and can almost always be avoided by using stats - and your search pattern is not how you would combine search and lookup elements. You are discarding Location and Responsible from the lookup because of the table statement on line 3 of your search. Anyway, try this (I removed the unused elements in your search and the double sort in your join).

index=servers sourcetype=logs
| stats latest(_time) as Time by System_Name
``` Calculate time differences ```
| eval last_seen_ago_in_seconds = now() - Time
| eval MISSING = if(isnull(last_seen_ago_in_seconds) OR last_seen_ago_in_seconds>7200,"MISSING","GOOD")
| where MISSING=="MISSING"
``` Find the Location and Responsible ```
| lookup system_info.csv System as System_Name
``` Now render the results ```
| table System_Name Location Responsible MISSING
| sort -last_seen_ago_in_seconds
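To make the intent of the SPL above concrete, here is a minimal Python sketch of the same "missing systems" logic. The server names, locations, and timestamps are invented for illustration; only the 7200-second threshold and the isnull-or-stale condition come from the original query.

```python
# Hypothetical stand-ins for the lookup table and the last-seen times
# derived from the indexed events.
lookup_rows = [
    {"System": "web01", "Location": "NYC", "Responsible": "alice"},
    {"System": "db01",  "Location": "LDN", "Responsible": "bob"},
    {"System": "app01", "Location": "SFO", "Responsible": "carol"},
]
now = 1_700_000_000                                  # fixed "now" for determinism
last_seen = {"web01": now - 60, "db01": now - 10_000}  # app01 has never logged

THRESHOLD = 7200  # two hours, matching last_seen_ago_in_seconds > 7200

def missing_systems(lookup_rows, last_seen, now, threshold=THRESHOLD):
    """Return lookup rows whose system has been silent for more than
    `threshold` seconds, or has never logged at all - mirroring
    if(isnull(last_seen_ago_in_seconds) OR last_seen_ago_in_seconds>7200, ...)."""
    out = []
    for row in lookup_rows:
        seen = last_seen.get(row["System"])
        if seen is None or (now - seen) > threshold:
            out.append({**row, "MISSING": "MISSING"})
    return out

for row in missing_systems(lookup_rows, last_seen, now):
    print(row["System"], row["Location"], row["Responsible"], row["MISSING"])
```

The key point the sketch makes is that starting from the lookup (rather than from the events) is what lets never-seen servers like app01 appear in the result at all.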
I have a query that detects missing systems. The lookup table has the fields System, Location, Responsible. I am trying to get the Location and Responsible to show in the end result. It appears the join is losing those values. Is there a way to get those values into the final result?

| inputlookup system_info.csv
| eval System_Name=System
| table System_Name
| join type=left Sensor_Name
    [| search index=servers sourcetype=logs
     | stats latest(_time) as Time by System_Name
     | eval mytime=strftime(Time,"%Y-%m-%dT%H:%M:%S")
     | sort Time asc
     | eval now_time = now()
     | eval last_seen_ago_in_seconds = now_time - Time
     | sort -last_seen_ago_in_seconds ]
| stats values(*) as * by System_Name
| eval MISSING = if(isnull(last_seen_ago_in_seconds) OR last_seen_ago_in_seconds>7200,"MISSING","GOOD")
| where MISSING=="MISSING"
| table System_Name Location Responsible MISSING
Two problems with your regex. \d represents a digit 0-9, so unless your "string" only includes digits, \d+ will not match. And as @livehybrid notes, your original string includes a pair of square brackets. A usable pattern to extract "apple" from "This is my [apple]" would be

| rex "This is my \[(?<string>[^\]]+)\]"
| stats count by string

Note: _raw is the default field for the rex command. .* at the beginning and end of a regex serves no purpose except adding cost.
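The difference is easy to check outside Splunk. A small Python sketch, using the sample string "This is my [apple]" from the question:

```python
import re

sample = "This is my [apple]"  # sample string from the question

# \d+ matches digits only; "apple" is not digits, and the character
# right after "This is my " is "[", so neither of these patterns match:
assert re.search(r"This is my \d+", sample) is None
assert re.search(r"This is my (?P<string>\w+)", sample) is None

# Escaping the brackets and capturing everything up to "]" works:
m = re.search(r"This is my \[(?P<string>[^\]]+)\]", sample)
print(m.group("string"))  # prints: apple
```

Python's `(?P<name>...)` named group is the equivalent of SPL rex's `(?<name>...)` syntax; the character class `[^\]]+` is identical in both.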
In addition to the other comments, you don't need the .* at the start and end of the regex
Okay, I've followed the documentation for the KV store upgrade, ensured I disabled it beforehand, and also manually tried upgrading it, still no luck. After removing mog and starting Splunk again, I received messages such as: ERROR while running splunk-preinstall "/opt/splunk/var/log/splunk"
@GeneralBlack We might need to re-install the previous Splunk version for this; the best approach is to work with Support. https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.3/administer-the-app-key-value-store/migrate-the-kv-store-storage-engine#Upgrade_KV_store_server_to_version_4.2 To open a case, go to login.splunk.com > Support > Support Portal > Need help? > Create a Case. If this helps, please upvote.
"KVStore version upgrade precheck FAILED!" is the error I received