All Posts


@a1bg503461  The error indicates that opt_ca_certs_path is not defined in your configuration. Are you using SSL/TLS with Elasticsearch? If so, make sure the path to your .crt file is set in your config, e.g.:

    opt_ca_certs_path = /path/to/your/ca.crt

If you are not using SSL/TLS, try the following in your config instead:

    use_ssl = 0
    # opt_ca_certs_path =

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
@cdevoe57  As @bowesmana mentioned, join is not the best approach, since it can cause fields from the lookup to be lost. That said, if you still want to use join, try this:

    | inputlookup system_info.csv
    | eval System_Name=System
    | join type=left System_Name
        [ | search index=servers sourcetype=logs
          | stats latest(_time) as Time by System_Name
          | eval mytime=strftime(Time,"%Y-%m-%dT%H:%M:%S")
          | eval now_time = now()
          | eval last_seen_ago_in_seconds = now_time - Time ]
    | stats values(*) as * by System_Name
    | lookup system_info.csv System_Name OUTPUT Location Responsible
    | eval MISSING = if(isnull(last_seen_ago_in_seconds) OR last_seen_ago_in_seconds>7200,"MISSING","GOOD")
    | where MISSING=="MISSING"
    | table System_Name Location Responsible MISSING

Or you can try the following, without join:

    index=servers sourcetype=logs
    | stats latest(_time) as Time by System_Name
    | eval last_seen_ago_in_seconds = now() - Time
    | eval MISSING = if(isnull(last_seen_ago_in_seconds) OR last_seen_ago_in_seconds>7200, "MISSING", "GOOD")
    | where MISSING=="MISSING"
    | lookup system_info.csv System_Name OUTPUT Location Responsible
    | table System_Name Location Responsible MISSING last_seen_ago_in_seconds
    | sort -last_seen_ago_in_seconds

Regards, Prewin
@caschmid  \d+ matches only digits, not an arbitrary word. If "This is my" is always constant, you can try this:

    | rex field=_raw "This is my (?<string>\w+)"
    | stats count by string

Regards, Prewin
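To illustrate the difference outside Splunk, here is a small Python sketch (the sample event text is invented for illustration) showing why \d+ finds nothing in a word while \w+ captures it:

```python
import re

event = "This is my apple"

# \d+ matches one or more digits only, so nothing follows the prefix here.
digits = re.search(r"This is my (\d+)", event)
# \w+ matches word characters (letters, digits, underscore), so it captures the word.
word = re.search(r"This is my (\w+)", event)

print(digits)         # None: "apple" contains no digits
print(word.group(1))  # apple
```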
So let me start by saying I've been struggling with these lookup commands. My objective is to use the lookup, which contains all known servers, to find the servers that are not logging, including those that have yet to log. This modified query now gives me the other fields; however, the results are wrong. In the end I need to get the list of servers in the lookup that are not in the query results (index=servers sourcetype=logs).
Hello @heathramos,  On the DNS server you can find possible ERROR logs related to this issue by going to $SPLUNK_HOME/var/log/splunk and looking for the file named streamfwd.log. Please check for ERROR entries and share them here so we can help you work out what is going wrong.
Hello @Dallastek1, The explanation given by @livehybrid is correct. Based on the ERROR messages and screenshot, it appears that you are missing required permissions. Also make sure the permissions are granted as Application permissions rather than Delegated permissions, as Delegated permissions are not the correct type here.
Firstly, join is not a good way to do things in Splunk - it has limitations and can almost always be avoided by using stats - and your search pattern is not how you would combine search and lookup elements. You are discarding Location and Responsible from the lookup because of the table statement on line 3 of your search. Anyway, try this (I removed the unused elements in your search and the double sort in your join):

    index=servers sourcetype=logs
    | stats latest(_time) as Time by System_Name
    ``` Calculate time differences ```
    | eval last_seen_ago_in_seconds = now() - Time
    | eval MISSING = if(isnull(last_seen_ago_in_seconds) OR last_seen_ago_in_seconds>7200,"MISSING","GOOD")
    | where MISSING=="MISSING"
    ``` Find the Location and Responsible ```
    | lookup system_info.csv System as System_Name
    ``` Now render the results ```
    | table System_Name Location Responsible MISSING
    | sort -last_seen_ago_in_seconds
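The core logic of the stats-based approach can be sketched in plain Python (the lookup rows and timestamps below are invented for illustration): every system in the lookup is flagged MISSING when it has no events at all, or when its latest event is more than 7200 seconds old.

```python
# Hypothetical stand-ins for the lookup table and the latest event time per system.
lookup_rows = [
    {"System": "web01", "Location": "NYC", "Responsible": "alice"},
    {"System": "db01",  "Location": "LON", "Responsible": "bob"},
    {"System": "app01", "Location": "SFO", "Responsible": "carol"},
]
now = 1_700_000_000
last_seen = {"web01": now - 600, "db01": now - 10_000}  # app01 has never logged

missing = []
for row in lookup_rows:
    seen = last_seen.get(row["System"])
    # MISSING if the system never logged, or its latest event is > 7200 s old.
    if seen is None or now - seen > 7200:
        missing.append({**row, "MISSING": "MISSING"})

for m in missing:
    print(m["System"], m["Location"], m["Responsible"], m["MISSING"])
```

Note how driving the check from the lookup (rather than from the events) is what catches servers that have never logged at all - the same reason the SPL starts from inputlookup when you need "has never been seen" as well as "has gone quiet".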
I have a query that detects missing systems. The lookup table has fields System, Location, Responsible. I am trying to get the Location and Responsible to show in the end result. It appears the join is losing those values. Is there a way to get those values into the final result?

    | inputlookup system_info.csv
    | eval System_Name=System
    | table System_Name
    | join type=left Sensor_Name
        [| search index=servers sourcetype=logs
        | stats latest(_time) as Time by System_Name
        | eval mytime=strftime(Time,"%Y-%m-%dT%H:%M:%S")
        | sort Time asc
        | eval now_time = now()
        | eval last_seen_ago_in_seconds = now_time - Time
        | sort -last_seen_ago_in_seconds ]
    | stats values(*) as * by System_Name
    | eval MISSING = if(isnull(last_seen_ago_in_seconds) OR last_seen_ago_in_seconds>7200,"MISSING","GOOD")
    | where MISSING=="MISSING"
    | table System_Name Location Responsible MISSING
Two problems with your regex. \d represents a digit 0-9, so unless your "string" contains only digits, \d+ will not match. And as @livehybrid notes, your original string includes a pair of square brackets. A usable rex to extract "apple" from "This is my [apple]" would be

    | rex "This is my \[(?<string>[^\]]+)\]"
    | stats count by string

Note: _raw is the default field for the rex command, and .* at the beginning and end of a regex serves no purpose except adding cost.
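The same pattern can be checked in Python, where the regex syntax for this case is identical (the sample event is invented): the brackets are escaped as literals, and [^\]]+ captures everything up to the closing bracket.

```python
import re

event = "This is my [apple]"
# \[ and \] match the literal brackets; [^\]]+ captures up to the closing one.
m = re.search(r"This is my \[([^\]]+)\]", event)
print(m.group(1))  # apple
```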
In addition to the other comments, you don't need the .* at the start and end of the regex.
Okay, I've followed the documentation for the KV store upgrade, made sure I disabled it beforehand, and also tried upgrading it manually - still no luck. After removing mongod and starting Splunk again I received messages such as: ERROR while running splunk-preinstall "/opt/splunk/var/log/splunk"
@GeneralBlack We might need to re-install the previous Splunk version for this; the best approach is to work with Support. https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.3/administer-the-app-key-value-store/migrate-the-kv-store-storage-engine#Upgrade_KV_store_server_to_version_4.2 To open a case, go to login.splunk.com > Support > Support Portal > Need help? > Create a Case. If this helps, please upvote.
"KVStore version upgrade precheck FAILED!" is the error I received
Hello Sainag, I've tried calling Splunk customer support and keep getting sent in circles by the automated phone system. I've watched multiple tutorials, including some published by Splunk itself, and still no luck.
Hi @GeneralBlack  Please could you share the full error you are getting?

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @caschmid  Would something like this work for you? This assumes you know the string you want to count - is that right?

    | rex max_match=100 field=_raw "(?<extract>\[string\])"
    | stats count by extract
Use https://regex101.com to verify your regexes. In this case the match fails because "string" is not a number - \d+ means a sequence of digits. Depending on how precise you want the match to be, you might want \S+ or some other variation.
I can't propose any solution because I have no idea where the problem is. I don't even know which endpoint you're using. The remark about line breaking is just something worth knowing.
Ok. So it seems you're not just filling down, because at the end you're subtracting from what's already been counted. There is much more logic here. Are there any limitations on the number of versions per day? What if there are more than two versions? It seems much more complicated.
@GeneralBlack Please work with Splunk support. Maybe the mongod folder is missing and was not recreated after the upgrade?