All Posts

I am using Splunk 9.2.2. Sorry that I forgot to mention the Splunk version in my earlier post.
Thanks @isoutamo. The raw data already contains some backslashes: \"TOPIC_COMPLETION\" So I had to perform my search like this: index="..." "08:29:41.630" AND \\\"TOPIC_COMPLETION\\\" Now it's working properly.
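For anyone hitting the same thing, the escaping rule at play, as a minimal sketch (the index name is a placeholder, not from this thread): in an SPL search term, a literal backslash must be written as \\ and a literal double quote as \", so matching the raw sequence \"TOPIC_COMPLETION\" requires doubling each backslash and escaping each quote:

    index="your_index" \\\"TOPIC_COMPLETION\\\"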
The mockup data contains events from both index1 and index2 (the first column of the dummy data). It is assumed to be equivalent to searching over (index=index1 OR index=index2). Did you copy-paste my example search as-is, or did you modify it? And which Splunk version are you using?
Hi, here is how this should work, but as there is some "magic" in how those events are stored in buckets, it's not as simple as you might expect.
2.1) Just query that index for events older than the retention time. Quite probably some events will still be there. The reason is that the smallest storage artifact/object is a bucket, not an individual event, and one bucket can contain events from a very large time span.
2.2) This is totally dependent on the amount of data, the sizes of your instances and other resource aspects, so it always depends.
2.3) You could look at the MC (Monitoring Console): Settings -> MC -> Indexing -> Indexes and Volumes -> Index Detail: Deployment. This dashboard shows that information.
r. Ismo
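To illustrate point 2.1, a minimal SPL sketch (your_index and the 30-day boundary are placeholders for the real index and retention period):

    index=your_index earliest=0 latest=-30d@d
    | stats count min(_time) as oldest_event

A count of zero means nothing older than the retention boundary is still searchable. To see the bucket-level time spans that explain any leftovers:

    | dbinspect index=your_index
    | table bucketId state startEpoch endEpoch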
Hi, you haven't added any SPL for the query there. You could check and use the Splunk Dashboard Examples app https://splunkbase.splunk.com/app/1603 for creating your own dashboards.

<input type="multiselect" token="sourcetype_token" searchWhenChanged="true">
  <default>splunkd, splunk_web_service, splunkd_access</default>
  <!-- The final value will be surrounded by prefix and suffix -->
  <prefix>(</prefix>
  <suffix>)</suffix>
  <!-- Each value will be surrounded by the valuePrefix and valueSuffix -->
  <valuePrefix>sourcetype="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <!-- All the values and their valuePrefix and valueSuffix will be concatenated together with the delimiter between them -->
  <delimiter> OR </delimiter>
  <choice value="*">ALL</choice>
  <fieldForLabel>sourcetype</fieldForLabel>
  <fieldForValue>sourcetype</fieldForValue>
  <search>
    <query>index=_internal | stats count by sourcetype</query>
    <earliest>0</earliest>
  </search>
</input>

Just add/modify the <search><query>...</query></search> part in your form. r. Ismo
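To illustrate how the pieces combine (example selections of my own, not from the original post): if the user picks splunkd and splunkd_access, the prefix, valuePrefix/valueSuffix, delimiter and suffix assemble the token into

    (sourcetype="splunkd" OR sourcetype="splunkd_access")

which is what $sourcetype_token$ expands to inside the dashboard's searches.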
I am getting the error "could not create search". How do I fix this error? XML:

<input type="multiselect" token="environment">
  <label>Environments</label>
  <choice value="cfp08">p08</choice>
  <choice value="cfp07">p07</choice>
  <choice value="*">ALL</choice>
  <default>*</default>
  <valuePrefix>environment =</valuePrefix>
  <delimiter> OR </delimiter>
  <search>
    <query/>
  </search>
  <fieldForLabel>environment</fieldForLabel>
  <fieldForValue>environment</fieldForValue>
</input>
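A likely cause, judging from the snippet (an assumption, not confirmed in the thread): the <search> element contains an empty <query/>, but a dynamic input needs an actual search to populate its choices, for example something along the lines of

    <search>
      <query>index=your_index | stats count by environment</query>
    </search>

where your_index is a placeholder. The answer above shows a complete working input.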
Hello Splunkers! Here’s the question rewritten in a business context and structured in points:
1. Objective: To free up disk space by deleting 1 month of data from a specific Splunk index containing 1 year of data.
2. Key Considerations:
- How can we verify that the deletion of 1 month of data from the Splunk index was successful?
- How long does Splunk typically take to delete this amount of data from the indexes?
- Is there a way to monitor or observe the deletion of old buckets or data in the Splunk UI (via SPL queries)?
Thanks in advance!!
If you are using Splunk 9.x then Linux kernel 3.1 support has been removed: https://docs.splunk.com/Documentation/Splunk/9.3.1/ReleaseNotes/Deprecatedfeatures
Thanks @isoutamo, that's pretty much what I am looking for. I don't know if it is a version issue/bug, but the only difference I can find between the working install and the non-working install is the Linux kernel that I am using:

kernel version | DB Connect version - not working | DB Connect version - working
3.10.0         | 3.17.0 / 3.18.0                  | 3.16.0
5.14.0         | -                                | 3.17.0 / 3.18.0 / 3.16.0

Sometimes I also have the issue where the app gets launched but the configuration page remains empty or doesn't move forward past a certain point. Is this a common issue with Splunk DB Connect as well? Regards, Pravin
In Splunk, a sourcetype basically means the lexical format of a log event. So if those events differ in format when the severity differs, then creating separate sourcetypes for them is OK and the correct way. But if all events have basically the same format independent of severity, and you are merely using different fields based on it, then this does not make sense. If you still want separate sourcetypes, then you must definitely document why you have built this kind of solution. Otherwise the next administrator will be quite confused by it. Let's hope that this TA or some other TA will help you find the best solution for this case!
@PickleRick, sorry, I am not sure I fully understand. May I know where we are using index_2 at all in the query? Also, if I have to create the dummy data, would I not rather have two CSVs - one for the index_1 data and the other for the index_2 data? By the way, I tried to run the query and I am not getting the data in tabular format. Adding this - table index1Id, curEventOrigin, curEventId, prevEventOrigin, prevEventId - to the end of your query didn't help. Thanks, Ravi
Because each severity level produces a log with a different format, I want to differentiate them using sourcetype to make it easier to parse the fields. Good point about the add-on, I will try to use it; and as for the actual writing format, I have written it using capital/uppercase letters according to the docs.
Please share the solution if you are marking this as solved! That way others can see it and use the same solution too.
Then performance can be an issue with it. Basically, that storage must be capable of serving the full speed of the sum of your indexer nodes' (storage) interface speeds/capacity, plus all other sources which are using it. You definitely need to do performance tests with it before you take it into use! With SmartStore, all buckets other than hot buckets live in the object store. This means that from time to time Splunk wants to pull many of those into your nodes' caches within a short period, and that needs a lot of network capacity all at once. And remember that you cannot go back from SmartStore to traditional server storage without reindexing that data. There is no supported way to convert back from S2 to local storage!
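For reference, a minimal indexes.conf sketch of a SmartStore remote volume (the bucket name, endpoint and index name are placeholders, not taken from this thread):

    [volume:remote_store]
    storageType = remote
    path = s3://your-smartstore-bucket/indexes
    remote.s3.endpoint = https://s3.your-private-cloud.example

    [your_index]
    remotePath = volume:remote_store/$_index_name
    homePath = $SPLUNK_DB/your_index/db
    coldPath = $SPLUNK_DB/your_index/colddb
    thawedPath = $SPLUNK_DB/your_index/thaweddb

Once remotePath is set, warm and cold buckets live in the remote store with only a local cache on the indexers, which is exactly why the throughput testing above matters.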
Our private cloud...
Hi, why do you want to separate different severities into different sourcetypes? As @gcusello already said, this is not the Splunk way. Have you already checked this TA, https://splunkbase.splunk.com/app/1620 The Splunk Add-on for Cisco ASA? Usually it's best to use those if possible. Then you also get your input as CIM compliant and can use it with other apps much more easily. It's also easier to find issues and create alerts with CIM compliant sources. If you must use your own configurations, then look at https://docs.splunk.com/Documentation/Splunk/latest/Admin/Transformsconf to do those transforms. At the very least, you have a lowercase format attribute instead of the correct uppercase FORMAT. All those attribute names must be written exactly as the spec says. r. Ismo
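For illustration, a minimal props.conf/transforms.conf sketch of an index-time sourcetype override (the sourcetype names and the severity regex are assumptions, not taken from this thread):

    props.conf:
    [cisco:asa]
    TRANSFORMS-set_severity_sourcetype = asa_severity_sourcetype

    transforms.conf:
    [asa_severity_sourcetype]
    REGEX = %ASA-(\d)-
    FORMAT = sourcetype::cisco:asa:sev$1
    DEST_KEY = MetaData:Sourcetype

Note the uppercase REGEX, FORMAT and DEST_KEY attribute names; that is the casing the spec requires.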
I have roughly 70% modular inputs, 25% forwarded, and 5% other (scripted, HEC). Cold and frozen are on S3. In the last 5 years we have not needed to recall any frozen data, so this is not really important (I will cross that river whenever needed :)). What is important is around 90-120 days of historical, searchable data. So I should either move it back from cold after everything is set up, or just wait until it ages out but keep the old server to search it... "And with SmartStore, especially on-prem, you must ensure and test that you have enough throughput between nodes and S3 storage!" Exactly, that is what we are checking now. We can have 10G, but this is just theoretical because dedicated 10G is not possible...
I still have the same issue. I tried versions 3.17.0 / 3.18.0 and had the same issue. I am using Splunk 9.2.2, and DB Connect 3.16.0 worked fine. Is this a version issue?
Here is a link to the Splunk security announcements: https://advisory.splunk.com/?301=/en_us/product-security.html r. Ismo