All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


At the beginning, two examples. The first generates a chart with two series:

index=s1 | timechart sum(kmethod) avg(kduration)

The second uses 'count by' and generates just one series:

index=s1 | timechart count by kmethod

I would like to join both timecharts and merge the "count by" with the simple "avg" or "sum", so that:
- the first is a stacked bar from the second example
- the second is a line from the second series of the first example

Any hints?

K.
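One possible way to combine the two, sketched under the assumption that the kmethod/kduration field names from the question apply: run the "count by" timechart as the base search and attach the average with appendcols. Note that appendcols lines rows up positionally, so both searches should use the same explicit span and time range.

```spl
index=s1
| timechart span=1h count by kmethod
| appendcols
    [ search index=s1
      | timechart span=1h avg(kduration) AS avg_duration ]
```

In the visualization settings you could then use Chart Overlay to render avg_duration as a line over the stacked bars.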
As @ITWhisperer has said, subsearches run first, so you can't pass time from the outer search to the subsearch. However, the addinfo command is generally a way to know what time range your search has, so in the subsearch you can do something like this to remove the entries from the lookup that do NOT fit in the time range of the search:

[ | inputlookup lookup.csv
  | addinfo
  | eval first_time=strftime(info_min_time, "%H")
  | eval last_time=strftime(info_max_time, "%H")
  | rex field=Time "[^ ]* (?<hour>\d+)"
  | where hour>=first_time AND hour<=last_time ]

This takes the HOUR part of your lookup Time value and compares it to the search time range, retaining only the lookup entries that match the time range of your search. So when you combine the entries after this subsearch, only those from the lookup that are relevant to the range are collected with the real event data.
The INDEXED_EXTRACTIONS configuration belongs in props.conf of the universal forwarder.

| tstats count where index=* sourcetype=my_json_data by host
| stats values(host)

The search above should tell you which hosts need to be looked at. On those, remove INDEXED_EXTRACTIONS = json from the SHs and indexers and move this configuration to the forwarder's props.conf. Make sure the forwarder's inputs.conf for the JSON source you are ingesting tags the data with the appropriate sourcetype, then reference that sourcetype stanza in props.conf for your config, i.e. on the UF:

inputs.conf
[monitor:///file]
sourcetype = foo_json
index = bar

props.conf
[foo_json]
INDEXED_EXTRACTIONS = json

See: https://docs.splunk.com/Documentation/Splunk/6.5.2/Admin/Configurationparametersandt[…]A.&_ga=2.147263155.568450395.1710801981-1206481253.1693859797

INDEXED_EXTRACTIONS is unique in that it happens in the structured parsing queue of the universal forwarder, whereas parsing usually happens at a HF, or at the indexer if there is no HF. If you use a HF as the first point of ingest and no UF, then you place the configuration on the HF.

See: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Extractfieldsfromfileswithstructureddata

"If you have Splunk Cloud Platform and want to configure the extraction of fields from structured data, use the Splunk universal forwarder."
In email alerts, there is a checkbox for "Inline", which puts the search results table into the body of the email. If you would like more control over it, you could do some SPL magic to build a single field containing the HTML for a table in the arrangement you want, then put that field in the body.
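As a rough sketch of that SPL magic, assuming hypothetical fields status and count, you could assemble the HTML table yourself into one field:

```spl
| stats count BY status
| eval row = "<tr><td>" . status . "</td><td>" . count . "</td></tr>"
| stats list(row) AS rows
| eval html = "<table><tr><th>Status</th><th>Count</th></tr>" . mvjoin(rows, "") . "</table>"
| table html
```

In the alert's email message you could then reference the field with the $result.html$ token; whether the markup actually renders depends on the email client and the alert's HTML settings.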
As in running the same search that another user has previously run, but in a different time period?
At first glance, it seems your field/argument "userAccountPropertyFlag" is misspelled when passed to the script, ending with a 'd': "userAccountPropertyFlad".

If that doesn't fix it, you may be able to find more informative errors by searching the internal logs for entries relating to this script:

index=_internal user_account_control_property.py log_level=ERROR
Hi @titchynz, I was wondering if you found a solution for this. We are experiencing exactly the same thing and were hoping you'd done all of the hard work and have a solution that you could share. Thanks!
This is an odd error. Could you try opening the file at "/opt/splunk/lib/python3.7/ctypes/__init__.py" and commenting out the line:

CFUNCTYPE(c_int)(lambda: None)

It should be on line 273. Put a hash before it, so your _reset_cache() function looks like this:

def _reset_cache():
    _pointer_type_cache.clear()
    _c_functype_cache.clear()
    if _os.name == "nt":
        _win_functype_cache.clear()
    # _SimpleCData.c_wchar_p_from_param
    POINTER(c_wchar).from_param = c_wchar_p.from_param
    # _SimpleCData.c_char_p_from_param
    POINTER(c_char).from_param = c_char_p.from_param
    _pointer_type_cache[None] = c_void_p
    # XXX for whatever reasons, creating the first instance of a callback
    # function is needed for the unittests on Win64 to succeed. This MAY
    # be a compiler bug, since the problem occurs only when _ctypes is
    # compiled with the MS SDK compiler. Or an uninitialized variable?
    #CFUNCTYPE(c_int)(lambda: None)

Then try testing the upgrade again.
One thing to note is that strptime does not work with just a month and year; it needs a day value as well. Ref: https://docs.splunk.com/Documentation/SCS/current/SearchReference/DateandTimeFunctions

If the RefUser string has only a year and month, you could format it to include the day 01, e.g.

| eval RefUser = RefUser+"/01"
| eval RefUser = strptime(RefUser, "%Y/%m/%d")

For RefAtual, I assume you are taking 1 month earlier than the current _time value:

| eval RefAtual = relative_time(_time, "-1mon")

Once you have these two timestamps in Unix time format, you can take the absolute difference between them and divide by the number of seconds in a month (assuming 30 days in a month, that is 60*60*24*30), then round to the number of digits you want:

| eval months_between = abs(RefAtual - RefUser) / (60*60*24*30)
| eval months_between = round(months_between, 1)
Without knowing about your changes, it's hard to say what's happening. If you manually created or changed any .conf files though, I would check ownership and make sure they are owned by the splunk user. I've seen bundle validations fail when something doesn't have proper ownership.  
I have the following query that gives me week-over-week comparisons for the past month:

index="myIndex" earliest=-1mon "my query"
| timechart count as Visits
| timewrap w

I have added dropdowns to my dashboard to filter this data by a user-selected time window for every day in the one-month range. The four dropdowns correspond to the start hour, start minute, end hour, and end minute of the time window in military time. For example, to filter the data by 6:30 AM - 1:21 PM each day, the tokens would have the following values:

$start_hour_token$: '6'
$start_minute_token$: '30'
$end_hour_token$: '13'
$end_minute_token$: '21'

How would I modify the original query to make this work? Thanks! Jonathan
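One possible sketch, assuming the tokens resolve to plain numbers: convert each event's time of day to minutes since midnight and filter before the timechart runs.

```spl
index="myIndex" earliest=-1mon "my query"
| eval minute_of_day = tonumber(strftime(_time, "%H")) * 60 + tonumber(strftime(_time, "%M"))
| where minute_of_day >= ($start_hour_token$ * 60 + $start_minute_token$)
    AND minute_of_day <= ($end_hour_token$ * 60 + $end_minute_token$)
| fields - minute_of_day
| timechart count as Visits
| timewrap w
```

With the example values, the window works out to 390–801 minutes since midnight, i.e. 6:30 AM to 1:21 PM.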
Is there a way to create a query to show the errors from the Splunk TA and the KV store?
I tried today with "allowSorting": false in my JSON code and it is not working. Sharing my code:

{
    "type": "splunk.table",
    "dataSources": {
        "primary": "ds_z5csywhv"
    },
    "title": "Hits",
    "description": "Daily",
    "options": {
        "columnFormat": {
            "Daily_Hits": {
                "width": 65,
                "align": ["center"],
                "allowSorting": false
            }
        },
        "tableFormat": {
            "headerBackgroundColor": "#FFA476",
            "headerColor": "#772800"
        },
        "count": 5,
        "fontSize": "small",
        "font": "proportional",
        "showRowNumbers": true
    },
    "context": {},
    "showProgressBar": false,
    "showLastUpdated": false
}
I have 4 panels in a single row; because of this, the text on the y-axis is compressed and shows as ser...e_0 instead of server_xyz_0. So when I mouse over ser...e_0, I want the tooltip to show the full name server_xyz_0. Please help. Thanks
The filldown command is not going to help me here. I don't think that I properly explained what I wanted to do. The original table looks like this, with multiple values per hostname:

Hostname    Vendor     Product     Version
hostname1   Vendor1    Product1    Version1
            Vendor2    Product2    Version2
            Vendor3    Product3    Version3
            Vendor4    Product4    Version4
hostname2   Vendor1    Product2    Version2
            Vendor2    Product4    Version1
            Vendor3    Product3    Version5
            Vendor4    Product6    Version3

I want the new table to look like this:

Hostname    Vendor     Product     Version
hostname1   Vendor1    Product1    Version1
hostname1   Vendor2    Product2    Version2
hostname1   Vendor3    Product3    Version3
hostname1   Vendor4    Product4    Version4
hostname2   Vendor1    Product2    Version2
hostname2   Vendor2    Product4    Version1
hostname2   Vendor3    Product3    Version5
hostname2   Vendor4    Product6    Version3
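For reference, one common approach to this shape of problem is mvzip/mvexpand. This is only a sketch, and it assumes Vendor, Product, and Version are multivalue fields whose values line up positionally for each host:

```spl
| eval zipped = mvzip(mvzip(Vendor, Product, "|"), Version, "|")
| mvexpand zipped
| eval Vendor  = mvindex(split(zipped, "|"), 0)
| eval Product = mvindex(split(zipped, "|"), 1)
| eval Version = mvindex(split(zipped, "|"), 2)
| table Hostname Vendor Product Version
```

The "|" delimiter is arbitrary; pick any character that cannot appear in the field values themselves.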
Hi, Is it possible to create a custom permission structure within Splunk apps and integrate it with Splunk user roles? Use case: I'm developing a custom Splunk app using the Splunk UI Toolkit. I need to restrict some features within the app based on the custom Splunk user role the user has been assigned. Thank you.
Check out the filldown command.
Leaving it as-is. The SplunkForwarder folder and its contents are owned by root:wheel; the Applications folder is owned by root:admin.
Currently, I have a table that looks like this: Table1 Hostname   Vendor         Product              Version ----------------------------------------------------------------- hostname1  vendor1      product1             version1                           vendor2      product2             version2                           vendor3      product3             version3                           vendor4      product4             version4 ----------------------------------------------------------------- hostname2 vendor1      product2             version2                          vendor2      product4             version1                          vendor3      product3             version5                          vendor4      product6             version3 ----------------------------------------------------------------- In this scenario, each hostname has a list of vendors, products and versions attached to it. What I want to create is the following: Hostname      Vendor      Product        Version hostname1    vendor1   product1      version1 hostname1    vendor2   product2      version2 hostname1    vendor3   product3      version3 hostname1    vendor4   product4      version4 hostname2    vendor1   product2      version2 hostname2    vendor2   product4      version1 hostname2    vendor3   product3      version5 hostname2    vendor4   product6      version3   Does anyone have any ideas?
Interesting! Thanks for this; I'll review and give this a try. One question:  Are you creating a Splunk user and changing permissions recursively to splunk:splunk, or are you just leaving it as-is? (To this point, we've been doing the latter, but I'm wondering if creating a dedicated user might be preferable?)