All Posts

I want to set two tokens at the same time when a value is selected from a dropdown input. I have a dynamic dropdown input; let's say the possible values in the dropdown are A, B, and C. If A is selected I want to set two tokens, token1=X and token2=Y, and I will use those tokens in different panels. I tried the below, but one of the tokens is not getting its value:

<change>
  <condition value="A">
    <set token="T1">myRegex1(X)</set>
  </condition>
</change>
<change>
  <condition value="A">
    <set token="T2">myRegex2(Y)</set>
  </condition>
</change>

Thanks
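For reference, Simple XML allows several <set> elements inside a single <condition>, so both tokens can be assigned from one <change> block. A minimal sketch, reusing the token names T1 and T2 from the post above and keeping myRegex1(X) and myRegex2(Y) as placeholders:

<change>
  <condition value="A">
    <!-- both tokens are set when the same condition matches -->
    <set token="T1">myRegex1(X)</set>
    <set token="T2">myRegex2(Y)</set>
  </condition>
</change>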
@isoutamo I am using only a time selector in the dashboard; the code for the time selector is below.

<input id="time" type="time" token="time" depends="$operational_start_time$, $operational_end_time$" searchWhenChanged="true">
  <label>Time</label>
  <default>
    <earliest>-7d@h</earliest>
    <latest>now</latest>
  </default>
  <change>
    <eval token="time.earliest_epoch">if('earliest'="",0,if(isnum('earliest'),'earliest'+($operational_start_time$*3600),relative_time(now(),'earliest')))+($operational_start_time$*3600)</eval>
    <eval token="time.latest_epoch">if('earliest'="",0,if(isnum('earliest'),'earliest'+(86400)+($operational_start_time$*3600),relative_time(now(),'earliest')))+(86400)+($operational_start_time$*3600)</eval>
  </change>
Hello Splunkers!! How can I efficiently use the mvexpand command to expand multiple multi-value fields, given that it is a resource-intensive and expensive command? Please guide me.
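For context, a common way to avoid running mvexpand once per field is to stitch the parallel multi-value fields together with mvzip, expand once, and then split the combined value back out. A minimal sketch, assuming two parallel multi-value fields named fieldA and fieldB (hypothetical names):

| eval combined=mvzip(fieldA, fieldB, "|")
| mvexpand combined
| eval fieldA=mvindex(split(combined, "|"), 0), fieldB=mvindex(split(combined, "|"), 1)
| fields - combined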
With a racetrack, which is typically only single kilometers across, you shouldn't need to go to such lengths. Just calculate the approximate distance per degree of longitude at your latitude and use it to compute a normal Cartesian-plane distance. Unfortunately, matching against a lookup is only done on data treated as text, so you can't easily do a "shortest distance" match; you'd still need to calculate the distance to all points anyway. Depending on the number of data points, both to match and to match against, there are multiple approaches: you could do a simple X-by-Y calculation of all distances and find the nearest points, or you could do something akin to @bowesmana's solution and create narrowing bounding boxes to pre-select points for the fine calculations. A rough sketch of the flat-plane distance calculation follows.
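A minimal SPL sketch of the flat-plane approximation, assuming each event carries lat and lon fields plus a reference point ref_lat and ref_lon (all field names here are hypothetical). One degree of latitude is roughly 111.32 km, and a degree of longitude shrinks by the cosine of the latitude:

| eval km_per_deg_lat=111.32
| eval km_per_deg_lon=111.32*cos(ref_lat*pi()/180)
| eval dx=(lon-ref_lon)*km_per_deg_lon, dy=(lat-ref_lat)*km_per_deg_lat
| eval dist_km=sqrt(dx*dx + dy*dy)
| sort dist_km
| head 1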
@gcusello Here you are, I hope it works. This is the Message:

{"Module": SplunkTest""Microflow": ACT_Omnext_Create_Test""latesterror_message": "401: Access Denied   at SplunkTest.ACT_Omnext_Create_TEST (CallRest : 'Call REST (POST)') Advanced stacktrace:"http_status": "401"http_response_content": "{ "statusCode": 401, "message": "Access denied due to invalid subscription key. Make sure to provide a valid key for an active subscription." }"http_reasonphrase": "Access Denied"session_id": "912651c4-127f-4f02-a348-c79373e84444}

What I want is:

app: application_name: env: environment_id: hostname: instance index level: ERROR
Module: SplunkTest
Microflow: ACT_Omnext_Create_Test
latesterror_message: 401: Access Denied at SplunkTest.ACT_Omnext_Create_TEST (CallRest : 'Call REST (POST)')
http_status: 401
http_response_content: "{ "statusCode": 401, "message": "Access denied due to invalid subscription key. Make sure to provide a valid key for an active subscription." }
http_reasonphrase: Access Denied
session_id: "1111111-127f-4f02-a348-c79373e86a5d}
What do you mean by "SaaS" here? It's either a Splunk Enterprise instance (administered either by you or by a third party) or a Splunk Cloud service subscription.
That won't work. As you can see in the spec, it doesn't operate at the event level but on the file's mtime. It is a setting for this particular input type and doesn't make sense in another context. BTW, if you had files created a sufficiently long time ago but containing events with present-day timestamps, it still wouldn't ingest those files.
Hi @Emre , you should create some field extractions using regexes from the message field. If you can share a sample of your data in text format (not screenshot), we can help you. Ciao. Giuseppe
MAX_DAYS_AGO doesn't cut it. Splunk will still index the events, but it will assume that the timestamp parsed from the event was wrong and simply substitute another timestamp (whatever that effectively ends up being). @Andre_ What you could do to prevent some data from being indexed is add whitelists/blacklists to the inputs, matching certain timestamp values. That's a relatively ugly solution and not something to keep forever, but as a start it might be the way to go. Just be careful, because you do it differently when you're ingesting Windows logs as "classic" events and differently when they're XML.
You need to be a bit more precise about the requirements, but generally it does look like a case for properly sorting the data and using dedup so that it only "catches" the first result for any given combination of fields.
@Emre You can use Splunk's Field Extractions (props/transforms) or rex in your SPL to extract fields at search time. For example:

| rex field=_raw "Module:(?<Module>[^\n]+)"
| rex field=_raw "Microflow:\s*(?<Microflow>[^\n]+)"
| rex field=_raw "latesteror_message:(?<latesteror_message>[^\n]+)"
| rex field=_raw "http status:\s*(?<http_status>\d+)"
| rex field=_raw "Http reasonphrase\s*(?<Http_reasonphrase>[^\n]+)"

But best practice is to structure the data at the source itself.

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
Not at this time. Splunk can auto-extract values only if the whole _raw message consists of the structured data blob. There is an open idea on ideas.splunk.com - https://ideas.splunk.com/ideas/EID-I-208 - it is marked as a future prospect, but of course voting on it might provide some additional push. The alternative would be to cut away the remainder of the event so that only the JSON part is left, but that way you're losing some data. A rough sketch of that approach follows.
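For illustration, one way to trim each event down to its JSON part is a SEDCMD in props.conf that drops everything before the first opening brace. This is only a sketch, assuming the JSON blob sits at the end of the event; the sourcetype name my_sourcetype is hypothetical:

[my_sourcetype]
# drop everything up to the first "{" so only the JSON blob remains
SEDCMD-keep_json = s/^[^{]*//
# parse the remaining JSON at search time
KV_MODE = json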
Good day everyone, I am new to Splunk and I need some suggestions. We are sending our Mendix logs to Splunk Cloud, but our logs arrive in Splunk as a single event. Is it possible for me to extract the fields from the message part? Example:

Module:SplunkTest
Microflow: ACT_Omnext_Create
latesteror_message:Access denied..
http status: 401
Http reasonphrase Access denied...

Or should this data be structured in Mendix before being sent to Splunk? Thanks for any suggestions.
Well... There are multiple things here.

1. _time vs _indextime - this doesn't have to have anything to do with the performance of the indexing pipeline. There can be multiple reasons for it, from pipelines clogging to drifting time on the sources. It would need more detailed troubleshooting to find out the reason.

2. As a rule of thumb, indexed extractions are bad. While sometimes they are the "only way" using built-in Splunk mechanisms (for example, ingesting CSVs with a variable order of columns), it is generally better to have the data pre-processed with an external tool to transform it into a format more suitable for normal indexing.

3. Since you are talking about JSON data with indexed extractions, I suspect you're using the built-in _json sourcetype, which should not be used in production. The way to go would be to define your own sourcetype using KV_MODE=json and no indexed extractions (a sketch follows). You can't selectively skip indexing some fields when using indexed extractions, and single indexed fields are very tricky with structured data. I wouldn't recommend it.
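A minimal props.conf sketch of that suggestion; the sourcetype name my_json_events is purely a placeholder:

[my_json_events]
# parse the JSON at search time instead of at index time;
# note that there is deliberately no INDEXED_EXTRACTIONS setting here
KV_MODE = json
# one event per line is a common companion setting, adjust to the actual data
SHOULD_LINEMERGE = false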
@sverdhan Try the below with clients:

| tstats count WHERE index=* by index sourcetype
| rex field=index max_match=0 "(?<clients>\w+)(?<sensitivity>_private|_public)"
| lookup appserverdomainmapping.csv clients OUTPUT NewIndex, Domain, Sourcetype
| eval NewIndex=NewIndex.sensitivity
| table clients, sensitivity, Domain, Sourcetype, NewIndex

If you do not need to add clients and just want to display the lookup fields, you can use appendcols:

| tstats count WHERE index=* by index sourcetype
| rex field=index max_match=0 "(?<clients>\w+)(?<sensitivity>_private|_public)"
| appendcols [| inputlookup appserverdomainmapping.csv | fields Domain, Sourcetype, NewIndex]
| eval NewIndex=NewIndex.sensitivity
| table clients, sensitivity, Domain, Sourcetype, NewIndex

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
@sgarcia Blank h (host) values in license usage logs are due to Splunk's "squashing" process. This occurs to optimize log storage and performance when there are too many unique values to track individually.

If possible, analyze your actual event data, e.g.:

index=wineventlog
| eval bytes=len(_raw)
| stats sum(bytes) as bytes by host
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB

Also check the licensing usage reports, split by host for the last 60 days if available, or try metrics.log to identify which hosts have high throughput:

index="_internal" source="*metrics.log" group="per_host_thruput"

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
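Along the same lines, a hedged sketch for splitting license usage by host straight from the internal license_usage.log, where b is the reported byte count and h is the host (hosts that were squashed will show up with an empty h):

index=_internal source=*license_usage.log* type="Usage"
| stats sum(b) as bytes by h
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB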
Hello @sgarcia, This behavior in the logs is because the squash threshold limit is being hit for the h, s, and st fields in license_usage.log. An additional approach to measuring the volume of ingestion is metrics.log, using either per_host_thruput, per_source_thruput, or per_sourcetype_thruput. With this throughput data you can look at the series field to see which component has the highest volume. Additionally, squash_threshold can be configured in limits.conf, but it is NOT advisable to update the limit without consulting Splunk Support, because it can cause heavy memory issues if increased from the default value. Thanks, Tejas.
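To illustrate the metrics.log approach described above, a minimal sketch that sums the reported kilobytes per series for the per_host_thruput group (the kb and series fields come from metrics.log itself):

index=_internal source=*metrics.log* group=per_host_thruput
| stats sum(kb) as kb by series
| eval GB=round(kb/1024/1024, 2)
| sort - GB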
Hello @vishalduttauk, Can you provide the complete search that you used for migrating the data from one server to the other? Thanks, Tejas.
Hello Giuseppe, Thanks so much for your suggestion, but the query is giving an error: "Cannot find client in the source field client in the lookup table". Now, we can't add clients to the lookup table because that would complicate things. Can you please tell me other ways to do it, maybe through a join or something? Much appreciated.
Hello @kumva01, Can you explain more about what the source is and how you are trying to ingest the data? What will be the data flow? The props configuration alone will not be able to help you with breaking the events and maintaining the JSON format for further field extractions. Thanks, Tejas.