All Posts


Which server.pem file did you delete? You should also run a btool to see what cert is being used: $SPLUNK_HOME/bin/splunk btool server list --debug | grep -i "ssl"
Hi @Narendra.Rao, Did you see the latest reply? Please let the community know if it helped by clicking "Accept as Solution" or continue the conversation. 
Just a heads up, this add-on has been archived and a new version of it exists: https://splunkbase.splunk.com/app/5435 That may be the issue. What is confusing is that there aren't even any errors/warnings or anything in the logs. What search were you using, and does anything stand out, like a 404/401 error?
Hi @Jose Luis.Sanchez, I was digging around and found that it could be a proxy issue. 
Is there a way to monitor disconnects on a host (with a deployed universal forwarder) that cannot reach the indexer? We have an on-prem solution. We are simply trying to use this host to monitor whether network A can reach network B, because the host is in network A and the indexer is in network B.
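One commonly used approach, sketched here with assumptions (the 15-minute threshold is illustrative, and the search assumes the forwarder's connections normally appear in the indexer-side _internal metrics), is to alert when a forwarder stops phoning home:

```spl
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS last_seen BY hostname
| eval minutes_silent = round((now() - last_seen) / 60, 1)
| where minutes_silent > 15
```

Saved as an alert, this fires when a host in network A has not connected to the indexer in network B for the chosen window.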
Hello @Raghul.R, Sorry for the late reply, just getting back to work from a holiday. What exactly do you need help with? 
Hi @Maximiliano.Salibe, It looks like since this was an older thread, the other members did not chime in. Since it has been a few days, did you happen to find a solution or anything you can share? If you still need help with this, you can contact Cisco AppDynamics Support. https://www.appdynamics.com/support
It depends on what data you have in your events and how they are linked. For example, is the ticket number unique to the ticket? Do subsequent events contain all the information from previous events for the same ticket? Is the SLA fixed for all tickets, or is there a way to determine the SLA from the ticket (via a lookup perhaps)? Please provide more detail, ideally some anonymised representative sample events, so we can see what you are dealing with.
Hello All, I'm trying to use Splunk with Tableau, and in order to do so I need to use the Splunk ODBC Driver. I've followed these instructions: https://docs.splunk.com/Documentation/ODBC/3.1.1/UseODBC/InstallationmacOS and downloaded the driver; however, the driver only offers an option for macOS 11.6. I've tried downloading that driver, but the error I get is "File wasn't available on site". I'm wondering if anyone has any solutions I could try to download this driver. Thanks
Hi All, I have one set of output with 8 closed tickets for two consecutive months as the result of a Splunk query. I also need to check whether each of them breached its SLA, based on its priority level. How can I work through each and every record in a Splunk query? Please note: I also need a formula to check which tickets breached, what the breach age is, and finally the average breach age across tickets. Please suggest how to proceed with this use case.
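Without knowing the actual event layout, here is a hedged sketch of the usual eval/stats approach (the index name, field names such as priority, created_epoch, and closed_epoch, and the per-priority SLA hours are all placeholders, not established names):

```spl
index=tickets status=closed earliest=-60d@d
| eval sla_hours = case(priority=="P1", 4, priority=="P2", 8, priority=="P3", 24, true(), 72)
| eval age_hours = round((closed_epoch - created_epoch) / 3600, 1)
| eval breached = if(age_hours > sla_hours, 1, 0)
| eval breach_age_hours = if(breached==1, age_hours - sla_hours, null())
| stats count AS closed_tickets sum(breached) AS breached_tickets avg(breach_age_hours) AS avg_breach_age_hours BY priority
```

The eval commands run per event, so there is no need to explicitly "traverse" records; stats then rolls the per-ticket results up by priority.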
Consider using a lookup table that maps the first two octets to a location. If the lookup returns the same fields as the iplocation command, then you could use the geostats command to display the data on a map. You would probably need to create a lookup definition and use the Advanced settings to define a CIDR match on the address field. The lookup might look something like this:

addr            City  Country        Region      lat  lon
192.168.0.0/16  foo   United States  Texas       xxx  yyy
172.168.0.0/16  bar   United States  California  aaa  bbb
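Assuming a CIDR-matched lookup definition with the fields above (the lookup name ip_to_location and the src_ip field are placeholders for whatever you actually use), the search side might look like:

```spl
... | lookup ip_to_location addr AS src_ip OUTPUT City Country Region lat lon
| geostats latfield=lat longfield=lon count BY City
```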
Hello, I need your help with something. I want to populate a dropdown from a search result using JS. I want the dropdown to take the result of this search: index=_internal | stats count by source | table source   Thank you so much.
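If custom JS is not a hard requirement, a Simple XML dynamic input can populate a dropdown from that search with no code at all. A minimal sketch (the token name and label are placeholders):

```xml
<input type="dropdown" token="source_tok" searchWhenChanged="true">
  <label>Source</label>
  <search>
    <query>index=_internal | stats count by source | table source</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <fieldForLabel>source</fieldForLabel>
  <fieldForValue>source</fieldForValue>
</input>
```

Panels can then reference the selection as $source_tok$.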
Hello All, I am installing Alert Manager Enterprise on a standalone on-prem server. Can I have it index into an existing index, or should I use a new index for its config? Also, what would the HEC host field be (will it be the URL of my Splunk instance?), and what would the HEC port be? My understanding is that Alert Manager takes Splunk alerts and displays them; I'm not sure why HEC is even used when setting this up. Thank you for all the help!
Thank you everyone for commenting. I have pre-defined locations already, based on the first two octets of the IP address schema. I thought there would be a way to identify location in that manner. Example: a login attempt from user1 from 192.168.x.x means they are coming from Texas; a login attempt from user2 from 172.168.x.x means they are coming from California. Remember, these are examples, and I totally understand that geo-tagging their local IPs might not normally be possible since they are internal IPs. In this example we know the first two octets indicate California or Texas. The idea is to have a dashboard for Linux users that shows a map of user authentication taking place, based on IP address. There are only two IP address schemes we are dealing with and only two locations in this example, each corresponding to a location: 192.168.x.x is Texas and 172.168.x.x is California. Hope this helps :)
OK. Show us one of your 4624 events found in verbose mode (blur sensitive data if needed). BTW, looking at my 4624 events I don't see anything that should yield action=success extraction.
Ahhhh. yes. The usual confusion between Deployer and Deployment Server (I read "deploy server" as Deployer, you read it - probably good - as DS). This naming is confusing, especially for newbies.
The iplocation command doesn't work with internal IP addresses (192.168.x.x, 10.x.x.x, etc.). That's because many companies use the same private IP address space, so a lookup by IP alone is not meaningful. Your company would have to create and install its own .mmdb file with the appropriate information.
Two or even three octets are insufficient to identify a location. What is it you are really trying to show?
I have a Linux environment and SSH is a thing here. I need to show SSH logins with location. I got the map to work, but now I need to figure out how to show the IPs in two locations based on the first two octets of the IP address schema.

Example: Texas: 192.168.x.x California: 172.16.x.x

index=Example_index "ssh" sourcetype="Example_audit" "res"=success type=USER_LOGIN hostname=* | iplocation addr | geostats latfield=lat longfield=lon count
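Since iplocation can't resolve private ranges, one possible variant of the search above swaps it for a CIDR-matched lookup (the lookup name internal_ip_location and its output fields are assumptions; the lookup definition would need match type CIDR on the addr field):

```spl
index=Example_index "ssh" sourcetype="Example_audit" "res"=success type=USER_LOGIN hostname=*
| lookup internal_ip_location addr OUTPUT City lat lon
| geostats latfield=lat longfield=lon count BY City
```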
I am getting the error below regarding pass4SymmKey:

WARN HTTPAuthManager [1045 MainThread] - pass4SymmKey length is too short. See pass4SymmKey_minLength under the clustering stanza in server.conf
INFO ServerRoles [1045 MainThread] - Declared role=cluster_master.
INFO ServerRoles [1045 MainThread] - Declared role=cluster_manager.
ERROR ClusteringMgr [1045 MainThread] - pass4SymmKey setting in the clustering or general stanza of server.conf is set to empty or the default value. You must change it to a different value.
ERROR loader [1045 MainThread] - clustering initialization failed; won't start splunkd

What exactly is the problem? I defined a pass4SymmKey of the proper length, but it is still not working. The server.conf file for the updated version looks like this:

[general]
serverName = ***
pass4SymmKey = generated_pass4SymmKey_value

[sslConfig]
sslPassword = ***

description = ABCDEFGH
peers = *
quota = MAX
stack_id = ***

description = ABCDEFGH
peers = *
quota = MAX
stack_id = forwarder

[***:ABCDEFGH]
description = ABCDEFGH
peers = *
quota = MAX
stack_id = free

[indexer_discovery]

[clustering]
cluster_label = ***
mode = manager
replication_factor = 3
search_factor = 2
pass4SymmKey_minLength = 32

What am I missing?
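One thing worth checking: the error complains about the pass4SymmKey used for clustering specifically, and the [clustering] stanza shown sets only pass4SymmKey_minLength, not a key of its own. A hedged sketch of the stanza with the key set explicitly (the value is a placeholder, not a real key):

```ini
[clustering]
cluster_label = ***
mode = manager
replication_factor = 3
search_factor = 2
pass4SymmKey_minLength = 32
# assumption: a secret of at least pass4SymmKey_minLength (32) characters,
# identical on the manager and every peer node
pass4SymmKey = <32-plus-character-shared-secret>
```

Restarting splunkd after the change should replace the plaintext value with an encrypted one if the key is accepted.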