All Posts

Thanks @isoutamo , that's pretty much what I am looking for. I don't know if that's a version issue/bug, but the only difference I can find between the working install and the non-working install is the Linux kernel that I am using.

kernel version | DB Connect version - not working | DB Connect version - working
3.10.0         | 3.17.0 / 3.18.0                  | 3.16.0
5.14.0         | -                                | 3.17.0 / 3.18.0 / 3.16.0

Sometimes I also have the issue where the app gets launched but the configuration page remains empty or doesn't move forward past a certain point. Is this a common issue with Splunk DB Connect as well?

Regards, Pravin
In Splunk, a sourcetype basically means the lexical format of a log event. So if those events differ in format when the severity differs, then it's OK and correct to create separate sourcetypes for them. But if all events have basically the same format independent of severity, and you are just using different fields based on it, then this doesn't make sense. If you still want separate sourcetypes, then you must definitely document why you have done this kind of solution; otherwise the next administrator will be quite confused by it. Let's hope that this TA or some other TA will help you find the best solution for this case!
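As a concrete illustration of "lexical format": a sourcetype is essentially one props.conf stanza like the sketch below, where the stanza name and the line-breaking/timestamp settings are hypothetical placeholders rather than anything tuned to your data. One stanza like this covers every severity as long as the events share the same format:

# props.conf (illustrative stanza; name and settings are placeholders)
[my_app:syslog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25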
@PickleRick , sorry, I am not sure I fully understand. May I know where we are using index_2 at all in the query? Also, if I have to form the dummy data, would I not rather have two CSVs - one for the index_1 data and the other for the index_2 data? Btw, I tried to run the query and I am not getting the data in tabular format. Adding this - table index1Id, curEventOrigin, curEventId, prevEventOrigin, prevEventId - to the end of your query didn't help. Thanks Ravi
Because each log with a different severity level has a different log format, I want to differentiate them using sourcetype to make it easier to parse the fields. Good, I will try to use the add-on, and for the actual writing format I have written it using capital/uppercase letters according to the docs.
Please share the solution if you are marking this as solved! That way others can see it and use the same solution too.
Then performance can be an issue with it. Basically, that storage must be able to serve the combined full speed of your indexer nodes' (storage) interface capacity plus all other sources that are using it. You definitely need to run performance tests with it before you take it into use! With SmartStore, all buckets other than hot buckets live in the remote store. This means that from time to time Splunk wants to pull many of them into your nodes' local caches within a short period of time, and that needs a lot of network capacity at once. And remember that you cannot go back from SmartStore to traditional local storage without reindexing the data; there is no supported way to convert back from S2 to local storage!
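For reference, a SmartStore index in indexes.conf looks roughly like the sketch below; the volume name, bucket, endpoint, credentials and index name are hypothetical placeholders, not a tested configuration:

# indexes.conf (illustrative only; all values are placeholders)
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/indexes
remote.s3.endpoint = https://s3.example.internal
remote.s3.access_key = <access_key>
remote.s3.secret_key = <secret_key>

[my_index]
remotePath = volume:remote_store/$_index_name
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

The local paths still exist and serve as the cache that gets filled from the remote store, which is exactly where the throughput question above comes in.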
Our private cloud...
Hi, why do you want to separate different severities into different sourcetypes? As @gcusello already said, this is not the Splunk Way. Have you already checked this TA, the Splunk Add-on for Cisco ASA: https://splunkbase.splunk.com/app/1620 ? Usually it's best to use those if possible. Then you also get your input as CIM compliant and can use it with other apps much more easily. It's also easier to find issues and create alerts with CIM-compliant sources. If you must use your own configurations, then look at https://docs.splunk.com/Documentation/Splunk/latest/Admin/Transformsconf for how to do those transforms. Also, at the very least you have the lowercase "format" instead of the correct uppercase FORMAT; all those attribute names must be written exactly as the spec says. r. Ismo
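If you do go down the separate-sourcetype route, here is a minimal sketch of that transforms approach with the attribute names in their correct uppercase form. The stanza names, the regex and the resulting sourcetype below are only hypothetical placeholders for illustration, not a tested configuration:

# props.conf (hypothetical incoming sourcetype)
[cisco:asa]
TRANSFORMS-set_severity_sourcetype = asa_warning_sourcetype

# transforms.conf (regex and target sourcetype are placeholders)
[asa_warning_sourcetype]
REGEX = %ASA-4-
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:asa:warning

Note that REGEX, DEST_KEY and FORMAT have to be written in uppercase exactly like this.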
I have about 70% modular input, 25% forwarded, and 5% other (scripted, HEC). Cold and Frozen are on S3. In the last 5 years we have not needed to recall any frozen data, so this is not really important (I will cross that river when needed :)). What is important is around 90-120 days of historical, searchable data. So should I move it back from cold after everything is set up, or just wait until it ages out and keep the old server to search it... "And with SmartStore, especially on-prem, you must ensure and test that you have enough throughput between nodes and S3 storage!" Exactly, that is what we are checking now. We can have 10G, but this is only theoretical because a dedicated 10G is not possible...
I still have the same issue. I tried with versions 3.17.0 / 3.18.0 and had the same issue. I am using Splunk 9.2.2, and DB Connect 3.16.0 worked fine. Is this a version issue?
Here is a link to the Splunk security announcements: https://advisory.splunk.com/?301=/en_us/product-security.html r. Ismo
You said S3, but is this AWS S3 or some other S3-compatible storage from another vendor?
We have S3. Currently we are using an NFS bridge to mount it on the server and send the cold buckets there. The plan is to change to SmartStore.
Hi @Mallika1217 , as @inventsekar also said, you don't need a LinkedIn account to access Splunk downloads. You only have to register your account on the Splunk site, and then you can download all the Splunk updates and apps (except Premium Apps) you want (Get a Splunk.com Account | Splunk). Ciao. Giuseppe
Hi @fabiyogo , why would you use different sourcetypes for different severity levels? All Splunk parsing rules are usually tied to the sourcetype, which means that with three sourcetypes you have to create more parsing rules for the same data. Instead, you could use a single sourcetype (so you create only one set of parsing rules) and tag events of different severity using eventtypes and tags. Ciao. Giuseppe
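As a rough sketch of that eventtype/tag approach (the eventtype name, sourcetype and search string below are hypothetical; adjust them to your real severity pattern):

# eventtypes.conf
[asa_warning]
search = sourcetype=cisco:asa "%ASA-4-"

# tags.conf
[eventtype=asa_warning]
warning = enabled

This way you keep one set of parsing rules and still select severities in searches with eventtype=asa_warning or tag=warning.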
Hi All, we have created a table viz containing 2 levels of dropdowns which share the same index and sourcetype. While implementing the Row Expansion JScript in the dashboard, we get results at 2 levels; however, the second-level expansion exits abruptly. We also notice that pagination only works in the first-level table (inner child table row expansion) for the initial row we select, and only once. If we select the second row/entry in the same parent table, the inner child table pagination freezes. We need to reload the dashboard every time to fix this.
Wait a second. "We have our virtual environment and S3 as well" - does that mean that you're using SmartStore, or is this S3 unrelated to Splunk?
Why do you want to put those HFs between the sources and the indexers? Usually it's better without them. Almost the only reason you need them is a security policy that requires isolated security zones, where you must use intermediate HFs as gateways/proxies between those zones. Or was it that you currently have some modular inputs or other inputs on this standalone instance? In that case your plan is correct: you should set up the needed number of HFs to handle those, but only to manage those inputs. Please remember that almost all inputs are not HA-aware and you cannot run them in parallel on several HFs at the same time.

Are those buckets just frozen storage from which you thaw data into use when needed, or are they already used as SmartStore storage? If I understand correctly, the first option is currently in use? If so, then just keep them as they are or move them to some other storage. If I recall correctly, you cannot restore (thaw) them into a SmartStore-enabled cluster index. Anyhow, as those are standalone buckets, I propose using an individual all-in-one box or indexer to restore them if/when needed; the rest of the time that box can be down.

And with SmartStore, especially on-prem, you must ensure and test that you have enough throughput between the nodes and the S3 storage!
Hello, I am looking to configure a POST request using a webhook as an alert action, but I can't see any authentication options. How do I add authentication to the webhook?