All Posts

It is not clear what you are trying to do here - the second search generates a count for each unique value of kmethod, which is presumably a number, since the first search is summing it? Please can you clarify what you are trying to do, perhaps provide some sample (anonymised) events so we can see what you are dealing with, and an example of your expected result?

I have a Splunk instance deployed on an EBS volume mounted to an EC2 instance. I started working on enabling SmartStore for one of my indexes, but whenever indexes.conf is configured to let one of my indexes use SmartStore, Splunk hangs on this step when I restart it:

Checking prerequisites...
	Checking http port [8000]: open
	Checking mgmt port [8089]: open
	Checking appserver port [127.0.0.1:8065]: open
	Checking kvstore port [8191]: open
	Checking configuration... Done.
	Checking critical directories...	Done
	Checking indexes...

Nothing found in the logs; I am just puzzled how to fix this. Can anybody hint at what the issue might be?

indexes.conf:

[volume:s3volumeone]
storageType = remote
path = s3://some-bucket-name
remote.s3.endpoint = https://s3.us-west-2.amazonaws.com

[smart_store_index_10]
remotePath = volume:s3volumeone/$_index_name
homePath = $SPLUNK_DB/$_index_name/db
coldPath = $SPLUNK_DB/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
maxGlobalDataSizeMB = 0
maxGlobalRawDataSizeMB = 0
homePath.maxDataSizeMB = 1000
maxHotBuckets = 2
maxDataSize = 3
maxWarmDBCount = 5
frozenTimePeriodInSecs = 10800

The small numbers for bucket size etc. are intentional, to allow quick testing of the settings.

Do you have the date_* fields in your data? If so, you can do this search...

earliest=-1mon (date_hour>$start_hour_token$ OR (date_hour=$start_hour_token$ date_minute>=$start_minute_token$)) (date_hour<$end_hour_token$ OR (date_hour=$end_hour_token$ date_minute<$end_minute_token$))

If you don't have those fields extracted, then you will have to use an eval statement to create the date_hour and date_minute fields and then a where clause to do the same comparison as above.
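For illustration, here is a minimal sketch of that eval/where approach, reusing the search and tokens from the question (only one way to write it):

index="myIndex" earliest=-1mon "my query"
| eval date_hour=tonumber(strftime(_time, "%H")), date_minute=tonumber(strftime(_time, "%M"))
| where (date_hour>$start_hour_token$ OR (date_hour=$start_hour_token$ AND date_minute>=$start_minute_token$))
    AND (date_hour<$end_hour_token$ OR (date_hour=$end_hour_token$ AND date_minute<$end_minute_token$))
| timechart count as Visits
| timewrap w
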
It's worth noting, however, as @bowesmana pointed out, that eventstats is a relatively "heavy" command because it needs to generate the whole result set and gather it on the search head in order to create the statistics which it later adds to the results. With a small data set you can get away with just calling eventstats and processing the results further. If your initial result set is big, you might indeed want to limit the set of processed fields (including removing _raw if it's no longer needed).
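For example, a rough sketch of trimming the result set before eventstats (the index and field names here are hypothetical):

index=myindex sourcetype=mydata
| fields host, duration
| fields - _raw
| eventstats avg(duration) as avg_duration by host
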
To begin with, two examples. The first one:

index=s1 | timechart sum(kmethod) avg(kduration)

generates a two-series chart. The second one uses 'count by':

index=s1 | timechart count by kmethod

generates just one series.

I would like to join both timecharts and kind of merge the "count by" with the simple "avg" or "sum", so that:
- the first one is a 'stacked bar' from the second example
- the second one is a 'line' from the second series of the first example

Any hints?

K.

As @ITWhisperer has said, subsearches run first, so you can't pass time from the outer search to the subsearch. However, the addinfo command is generally a way to know what time range your search has, so in the subsearch you can do something like this to remove the entries from the lookup that do NOT fit in the time range of the search:

[ | inputlookup lookup.csv
  | addinfo
  | eval first_time=strftime(info_min_time, "%H")
  | eval last_time=strftime(info_max_time, "%H")
  | rex field=Time "[^ ]* (?<hour>\d+)"
  | where hour>=first_time AND hour<=last_time ]

This takes the HOUR part of your lookup Time value and compares it to the search time range, retaining only the lookup entries that match the time range of your search, so when you combine the entries after this subsearch, only those from the lookup that are relevant to the range are collected with the real event data.
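For context, here is a hedged sketch of how that subsearch might be combined with the main search - the index and sourcetype are hypothetical, since your original search isn't shown here. The assumption is that the append subsearch inherits the outer search's time range, so addinfo inside it reports the same info_min_time/info_max_time as the main search:

index=main_index sourcetype=my_data
| append
    [ | inputlookup lookup.csv
      | addinfo
      | eval first_time=strftime(info_min_time, "%H")
      | eval last_time=strftime(info_max_time, "%H")
      | rex field=Time "[^ ]* (?<hour>\d+)"
      | where hour>=first_time AND hour<=last_time ]
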
The INDEXED_EXTRACTIONS configuration belongs in props.conf on the universal forwarder.

| tstats count where index=* sourcetype=my_json_data by host
| stats values(host)

The search above should tell you which hosts need to be looked at. You would remove INDEXED_EXTRACTIONS = json from the SHs and indexers and move this configuration (INDEXED_EXTRACTIONS = json) to the forwarders' props.conf. Make sure the forwarder inputs.conf for the JSON source you are ingesting is tagging the data with the appropriate sourcetype, then in props.conf reference that sourcetype stanza for your config, i.e. on the UF:

inputs.conf
[monitor:///file]
sourcetype = foo_json
index = bar

props.conf
[foo_json]
INDEXED_EXTRACTIONS = json

See: https://docs.splunk.com/Documentation/Splunk/6.5.2/Admin/Configurationparametersandt[…]A.&_ga=2.147263155.568450395.1710801981-1206481253.1693859797

INDEXED_EXTRACTIONS are unique in that they happen in the structured parsing queue of the universal forwarder, whereas parsing usually happens at a HF, or at the indexer if there is no HF. If you use a HF as the first point of ingest and no UF, then you place the configuration there on the HF.

See: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Extractfieldsfromfileswithstructureddata
"If you have Splunk Cloud Platform and want to configure the extraction of fields from structured data, use the Splunk universal forwarder."

In email alerts, there is a checkbox for "Inline", which puts the search results table into the body of the email. If you would like more control over it, you could do some SPL magic to build a single field containing the HTML for a table in the arrangement you want, then put that field in the body.
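A rough sketch of that SPL approach, with hypothetical field names; the resulting field could then be referenced in the email body as $result.mail_table$:

index=myindex sourcetype=mydata
| stats count by host
| eval row="<tr><td>".host."</td><td>".count."</td></tr>"
| stats list(row) as rows
| eval mail_table="<table><tr><th>Host</th><th>Count</th></tr>".mvjoin(rows, "")."</table>"
| fields mail_table
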
As in running the same search that another user has previously run, but in a different time period?
At first glance, it seems your field/argument "userAccountPropertyFlag" is misspelled when passed to the script - it ends with a 'd' character: "userAccountPropertyFlad".

If fixing that doesn't help, you may be able to find more informative errors by searching the internal logs relating to this script:

index=_internal user_account_control_property.py log_level=ERROR

Hi @titchynz, I was wondering if you found a solution for this. We are experiencing exactly the same thing and were hoping perhaps you'd done all of the hard work and have a solution that you could share. Thanks!

This is an odd error. Could you try opening the file at "/opt/splunk/lib/python3.7/ctypes/__init__.py", then commenting out this line:

CFUNCTYPE(c_int)(lambda: None)

It should be on line 273. Put a hash before it, so your _reset_cache() function looks like this:

def _reset_cache():
    _pointer_type_cache.clear()
    _c_functype_cache.clear()
    if _os.name == "nt":
        _win_functype_cache.clear()
    # _SimpleCData.c_wchar_p_from_param
    POINTER(c_wchar).from_param = c_wchar_p.from_param
    # _SimpleCData.c_char_p_from_param
    POINTER(c_char).from_param = c_char_p.from_param
    _pointer_type_cache[None] = c_void_p
    # XXX for whatever reasons, creating the first instance of a callback
    # function is needed for the unittests on Win64 to succeed. This MAY
    # be a compiler bug, since the problem occurs only when _ctypes is
    # compiled with the MS SDK compiler. Or an uninitialized variable?
    #CFUNCTYPE(c_int)(lambda: None)

Then try testing the upgrade.

One thing to note is that strptime does not work with just a month and year; it needs a day value as well. Ref: https://docs.splunk.com/Documentation/SCS/current/SearchReference/DateandTimeFunctions

If the RefUser string has only a year and month, you could format it to include the day 01. E.g.

| eval RefUser = RefUser."/01"
| eval RefUser = strptime(RefUser, "%Y/%m/%d")

For RefAtual, I assume you are taking 1 month earlier than the current _time value:

| eval RefAtual = relative_time(_time, "-1mon")

Once you have these two timestamps in unixtime format, you can take the absolute difference between them and divide by the number of seconds in a month (assuming 30 days in a month, that is 60*60*24*30). Then set the number of digits to round to:

| eval months_between = abs(RefAtual - RefUser) / (60*60*24*30)
| eval months_between = round(months_between, 1)
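Putting those pieces together, the whole calculation might look like this (assuming RefUser arrives as a "YYYY/MM" string on each event):

| eval RefUser = RefUser."/01"
| eval RefUser = strptime(RefUser, "%Y/%m/%d")
| eval RefAtual = relative_time(_time, "-1mon")
| eval months_between = round(abs(RefAtual - RefUser) / (60*60*24*30), 1)
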
Without knowing about your changes, it's hard to say what's happening. If you manually created or changed any .conf files though, I would check ownership and make sure they are owned by the splunk user. I've seen bundle validations fail when something doesn't have proper ownership.  
I have the following query that gives me week-over-week comparisons for the past month:

index="myIndex" earliest=-1mon "my query"
| timechart count as Visits
| timewrap w

I have added dropdowns to my dashboard to filter this data by a user-selected time window for every day in the one-month range. The four dropdowns correspond to the start hour, start minute, end hour, and end minute of the time window in military time. For example, to filter the data by 6:30 AM - 1:21 PM each day, the tokens would have the following values:

$start_hour_token$: '6'
$start_minute_token$: '30'
$end_hour_token$: '13'
$end_minute_token$: '21'

How would I modify the original query to make this work? Thanks!

Jonathan

Is there a way to create a query to show the errors from a Splunk TA and the KV store?

I tried today with "allowSorting": false in my JSON code and it is not working. Sharing my code:

{
    "type": "splunk.table",
    "dataSources": {
        "primary": "ds_z5csywhv"
    },
    "title": "Hits",
    "description": "Daily",
    "options": {
        "columnFormat": {
            "Daily_Hits": {
                "width": 65,
                "align": ["center"],
                "allowSorting": false
            }
        },
        "tableFormat": {
            "headerBackgroundColor": "#FFA476",
            "headerColor": "#772800"
        },
        "count": 5,
        "fontSize": "small",
        "font": "proportional",
        "showRowNumbers": true
    },
    "context": {},
    "showProgressBar": false,
    "showLastUpdated": false
}

I have 4 panels in a single row; because of this, the text on the y-axis is compressed and shows as ser...e_0 instead of server_xyz_0. So when I mouse over ser...e_0, I want the tooltip to show the full name server_xyz_0. Please help. Thanks.

The filldown command is not going to help me here. I don't think that I properly explained what I wanted to do. The original table looks like this (Vendor, Product, and Version are multivalue fields in each row):

Hostname  | Vendor                          | Product                             | Version
hostname1 | Vendor1 Vendor2 Vendor3 Vendor4 | Product1 Product2 Product3 Product4 | Version1 Version2 Version3 Version4
hostname2 | Vendor1 Vendor2 Vendor3 Vendor4 | Product2 Product4 Product3 Product6 | Version2 Version1 Version5 Version3

I want the new table to look like this:

Hostname  | Vendor  | Product  | Version
hostname1 | Vendor1 | Product1 | Version1
hostname1 | Vendor2 | Product2 | Version2
hostname1 | Vendor3 | Product3 | Version3
hostname1 | Vendor4 | Product4 | Version4
hostname2 | Vendor1 | Product2 | Version2
hostname2 | Vendor2 | Product4 | Version1
hostname2 | Vendor3 | Product3 | Version5
hostname2 | Vendor4 | Product6 | Version3
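In case it clarifies the goal, something along these lines is the kind of reshaping I imagine, assuming Vendor, Product, and Version are parallel multivalue fields (a sketch only, not something I have working):

| eval zipped = mvzip(mvzip(Vendor, Product, "|"), Version, "|")
| mvexpand zipped
| eval Vendor = mvindex(split(zipped, "|"), 0)
| eval Product = mvindex(split(zipped, "|"), 1)
| eval Version = mvindex(split(zipped, "|"), 2)
| fields - zipped
| table Hostname Vendor Product Version
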
Hi, Is it possible to create a custom permission structure within Splunk apps and integrate it with Splunk user roles?

Use case: I'm developing a custom Splunk app using the Splunk UI Toolkit. I need to restrict some features within the app based on the custom Splunk user role users have been assigned.

Thank you.