All Posts


I'm missing something and it's probably blatantly obvious... I have a search returning a number, and I want a filler gauge to show the value as it approaches a maximum. In this example, I'd like the gauge to cap at 10,000, but it always shows 100.
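A minimal sketch of one way this is often handled, assuming the search ends in a single numeric field (here called count, which is illustrative): the gauge command can set explicit range boundaries so the visualization scales to 10,000 instead of the default 0-100.
| stats count
| gauge count 0 2500 5000 10000
Alternatively, if the panel is a Simple XML gauge, setting the charting.chart.rangeValues option to [0,2500,5000,10000] should have a similar effect.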
Rich, thanks for the clarification. The Splunk documentation is kinda confusing on this specific topic. That is helpful, frustrating, and leaves me with even more questions. Now I have absolutely no idea why we have logs three years older than the retention is set to. There is nothing set up to freeze anything, so it should all be rolling out as it hits that 5.1-year mark. Would homePath.maxDataSizeMB override? I thought that was the cutoff to roll warm to cold and shouldn't affect it. The only limits set are:
maxTotalDataSizeMB = 1000000000 #1,000TB
homePath.maxDataSizeMB = 500000 #500GB
frozenTimePeriodInSecs = 160833600 (5.1 years)
maxDataSize = 2000
maxWarmDBCount = 2000
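One thing worth keeping in mind (hedged, since I can't see the buckets): frozenTimePeriodInSecs is evaluated per bucket against the newest event in that bucket, so a bucket spanning a long time range won't freeze until its latest event passes the 5.1-year mark. A quick sketch of a diagnostic search to see what is actually sitting on disk (the index name is a placeholder):
| dbinspect index=your_index
| eval bucket_age_days = round((now() - endEpoch) / 86400, 1)
| sort - bucket_age_days
| table bucketId state startEpoch endEpoch bucket_age_days sizeOnDiskMB path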
are there any working screenshots or demo available for this app? there seems to be no video tutorial or any guidance docs beside the main doc.  Any guidance would be helpful -  I am looking for a way to get JIRA->Splunk data in whenever there is a change in the issue or just able to query all the issues in JIRA via splunk and pull back stats 
Try something like this
| metadata type=sources where index=gwcc
| eval file=lower(mvindex(split(source,"/"),-1))
| dedup file
| table source, file
| sort file
When you've got ES control, but have to file a ticket that will take months to respond to from a Splunk core admin team for data issues, sometimes you just do what you gotta do.
@richgalloway  I checked the dbx_settings.conf file and I see the right Java path in it, but I am still seeing the same error. I have tried reinstalling the DB Connect app with no luck. I also tried searching for the string "/bin/bin" in the Splunk DB Connect app path, but the string does not show up in any file.
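If it helps, a broader sweep than just the app directory might turn up where the doubled path is coming from; this is just a generic sketch assuming a default $SPLUNK_HOME:
# search every configuration layer for the doubled path
grep -r "bin/bin" "$SPLUNK_HOME/etc/" 2>/dev/null
# show the merged dbx_settings.conf and which file each line is picked up from
$SPLUNK_HOME/bin/splunk btool dbx_settings list --debug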
@N_K  You can make an action block loop through a list of parameters with the right input from a format block. With the HTTP app it may be harder to do, as there are a lot of potential parameters. And yeah, please don't try to use requests outside of an app space. Depending on what you are using the HTTP app for, it may be best to build an app to handle it, as you get a lot more control over the behaviour; the HTTP app, IMO, is usually only useful for testing interactions with external APIs or for simple HTTP-related tasks. How many parameters are dynamic when using the HTTP app?
Below is a fairly simple query to fill a drop-down list in my dashboard.
index=gwcc
| eval file=lower(mvindex(split(source,"/"),-1))
| dedup file
| table source, file
| sort file
The point is it takes 30-60 seconds to generate. Do you have an idea how to simplify it, or write it in a more efficient way?
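If the dropdown only needs the list of sources (not the raw events), here is a sketch of a tstats-based variant that usually avoids the full event scan (same field logic as above):
| tstats count where index=gwcc by source
| eval file=lower(mvindex(split(source,"/"),-1))
| dedup file
| table source, file
| sort file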
@phanTom Thanks for the reply. Unfortunately the input playbook contains an HTTP app block. I've tried to just make the request in a code block using requests, but I am running into proxy errors; it works fine when I use the app.
I'm looking into upgrading Splunk Enterprise from 9.0.4 to 9.3.0. Following the upgrade docs, there's a step to back up the KV store:
Check the KV store status
To check the status of the KV store, use the show kvstore-status command:
./splunk show kvstore-status
When I run this command, it asks me for a Splunk username and password. This environment was handed over by a project team, but nothing was handed over about what the Splunk password might be, or whether we actually use a KV store. I've tried the admin password, but that hasn't worked.
I've found some Splunk documents advising that the KV store config would be in $SPLUNK_HOME/etc/system/local/server.conf, under [kvstore]. There is nothing in our server.conf under [kvstore].
I've also found some notes saying the KV store won't start if a $SPLUNK_HOME\var\lib\splunk\kvstore\mongo\mongod.lock file is present. We have two Splunk servers: one has a lock file dated Oct 2022, the other dated July 19th. Based on this, I suspect the KV store isn't used, otherwise we'd have hit issues with it before. That's just a guess, but this is my first foray into Splunk, so I thought I'd ask: based on the above, do I need to back up the KV store, and are there any other checks to confirm definitively whether we have a KV store that's used?
Thanks in advance.
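Two small things that may help, sketched with placeholders: the CLI accepts credentials inline via -auth, and btool can show the effective [kvstore] settings (including defaults) even when nothing is set in your local server.conf.
./splunk show kvstore-status -auth <username>:<password>
./splunk btool server list kvstore --debug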
"_time is _the_ most important field " is precisely why we don't want to use the DATETIME_CONFIG=current solution. We are still using the _time, would be nice to use it together with _indextime. We ... See more...
"_time is _the_ most important field " is precisely why we don't want to use the DATETIME_CONFIG=current solution. We are still using the _time, would be nice to use it together with _indextime. We are operating at a scale too large to be fixing clocks.  When a misconfiguration is intended, we first have to catch it. We have to "account for lagging sources with our searches", which means very large time windows. Plus missing data in case of outages, so have to replay those searches to cover the outage timeframes. In any case, we are used to Splunk products being restrictive and making a lot of assumptions on how the customers should use it. We are working around exactly as you described it, just would be nice to have more options.
Only certain sourcetypes supported by the TA map to CIM data models. The list is at https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Sourcetypes
If you don't see what you need, then you may need to add local aliases, etc. to the TA.
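For illustration only, a minimal sketch of what such a local override might look like in the TA's local/props.conf; the sourcetype, field names, and eval logic here are hypothetical:
[your:custom:sourcetype]
# hypothetical alias mapping a raw field onto a CIM field name
FIELDALIAS-cim_src = source_address AS src
# hypothetical normalization of an action field
EVAL-action = case(status=="OK", "success", true(), "failure")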
A few questions:
Do you have a TA for the logs you are ingesting, and is it set up on all the needed Splunk components (check your docs)?
Looking at the _internal logs, do you see that Splunk has ingested them? (See the sketch below.)
Can you search for a string that exists in your logs, across all your indexes, over the time range in which you verified the data was ingested, and find any matching events?

Also, for syslog data in general it is simpler and more durable to forward the data to a syslog server, have a UF monitor the relevant files, and then set up monitoring stanzas per host/data source:
[monitor://var/log...whatever]
whitelist = regex
blacklist = regex
host_segment = as needed
crcSalt = <SOURCE> {as needed}
sourcetype = syslog {or whatever you want}
index = yourIndex

Consult also: How the Splunk platform handles syslog data over the UDP network protocol - Splunk Documentation
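For the second question, a hedged example of one way to check ingestion from Splunk's own metrics (the series value is whatever sourcetype you assigned in the monitor stanza):
index=_internal source=*metrics.log* group=per_sourcetype_thruput series=syslog
| timechart span=5m sum(kb) AS ingested_kb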
After updating the SSL keys, error events from "ExecProcessor from python /opt/splunk/etc/apps/SA-Hydra/bin/bootstrap_hydra_gateway.py" (source "/opt/splunk/var/log/splunk/splunkd.log") began appearing in the "_internal" index. Splunk version is 7.3.2.
I'm just saying you're putting the cart before the horse. You know _now_ that something happened. When? 10 minutes ago? 10 hours ago? 10 days ago? Do you know whether you should react immediately and, for example, isolate the workstation to prevent the threat from spreading in your infrastructure, or whether you should rather focus on searching where it has already spread to?
You're trying to solve a different problem than the one you have. If you have sources which can be lagging, you should account for that in your searches so you don't have situations where you miss data because it came in and got indexed outside of your search window. But that's different from just happily letting your clocks run loose and trying to guess your way around it. IMHO you're simply solving the wrong problem. But of course YMMV.
EDIT: Oh, and of course you have the possibility of using _indextime in your searches. It's just that _time is _the_ most important field about the event.
PS: If you think _indextime is the most important field for you, just use DATETIME_CONFIG=current and be done with it.
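For reference, the props.conf form of that last suggestion, sketched with a placeholder sourcetype:
[your_sourcetype]
DATETIME_CONFIG = CURRENT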
Exactly what I needed. Thanks!
I can't, but at least I can catch that event with index time, correlate it with other security events, and analyze it to see a bigger picture. Things like that are expected in the security world, and it's better to catch them with an unreliable time than to miss them. Being able to tell when something happened is not as critical as being able to tell that it happened. Missing such events may mean a lot of damage to the company. We are not asking for ES to be a time-synchronization tool; simply allowing searches on both _indextime and _time would be incredibly useful.
There are pros and cons to everything, of course. But ES can't be - for example - a substitute for a reliable time source and proper time synchronization. That's not what it's for. If you don't have a reliable time, how can you tell when the event really happened? If you have a case where the clock can be set to absolutely anything, so that you have to search All Time, how can you tell when the event happened (not when it was reported)?
You are going to miss data if you are using event time for security alerting. Event timestamps are unreliable. We have seen event times two years in the future due to system clock misconfigurations. Event delays and outages are common. Our average delay is 20 minutes; the SLA for delivery is 24 hours. If we want to run security alerting every hour to reduce dwell time, we have to look back 24 hours instead of 1 hour. If we are running over 1K security searches, that adds up. On top of that, there is always a chance of missing a misconfigured clock unless we check All Time.
Using _indextime for alerting, and event time for analyzing the events, would work perfectly for our use case. Unfortunately, it seems not to be feasible with all the constraints in ES, so we have to run our searches over a very large time span to make sure we account for event delays, we have to check future times, and we have to have outage replay protocols. Very inconvenient. I wish we could just run searches on _indextime (every hour) with a broader _time (24 hours), not All Time.
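For what it's worth, something close to that can already be approximated with the index-time search modifiers, sketched here for an hourly run (the index name and the stats fields are placeholders for your detection logic):
index=your_index earliest=-24h latest=now _index_earliest=-1h _index_latest=now
| stats count by src dest signature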
Thanks everyone I will go through the upgrade route then.  It will be safer that way.