Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi Splunkers, I'm trying to create a table but can't and don't understand how to do it. I'd like to calculate the stats avg, max, and exactperc90 on 8 field values, adding a BY clause on 2 other fields. My problem is using a field's own name as one of its values. I'm not sure my explanation is clear, so I tried to draw it. Has anyone done this before? Thanks a lot for the support.
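A minimal sketch of the kind of search described above, assuming the eight measure fields share a common prefix (here `m_*`) and the two grouping fields are named `groupA` and `groupB` — all names are placeholders. Wildcarded aggregations with a matching wildcard in the AS clause avoid writing out 24 separate aggregation terms:

```spl
| stats avg(m_*) AS avg_m_*
        max(m_*) AS max_m_*
        exactperc90(m_*) AS p90_m_*
  BY groupA groupB
```

If the fields don't share a prefix, the same shape works with each field listed explicitly.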
Hi, when using the sendemail command like this:

| makeresults | sendemail to="aaaaaa", from="bbbbbb", subject="" message="how do I get rid of the redundant characters \"---\"" sendresults=false inline=true format=table content_type=plain footer="i-Alert"

I receive:

how do I get rid of the redundant characters "---"
------------------------------------------------------------------------
i-Alert

I would really like to get rid of the "--------" separator string, especially when I want to use sendemail for SMS/text messages; it is a useless waste of text space/length per message. I have looked in sendemail.py and found 3 strings like "---- ---", but removing them had no effect on the result. Where should I look further to remove this useless string of characters? Regards, AshleyP
Hi everyone, first post here; hopefully I'm in the right location. I recently installed the File/Directory Information Input add-on to try capturing file creation/modified timestamps and permissions. I'm attempting local inputs on a Splunk Enterprise server and a UF (both Windows), but neither will capture the file owner or ACE permissions. I'm not seeing any errors in file_meta_data_modular_input.log, and Python 2.7 is installed on each instance. This is all I get:

is_directory=1 file_count=3 directory_count=0 path=C:\test atime="Tue Oct 6 16:31:22 2020" atime_epoch=1602016282.55 ctime="Tue Oct 6 16:31:18 2020" ctime_epoch=1602016278.12 dev=0 gid=0 ino=0 mode=16895 mtime="Tue Oct 6 16:31:22 2020" mtime_epoch=1602016282.55 nlink=0 size=4096 uid=0 time="Wed Oct 07 07:23:26 2020"

inputs.conf:

[file_meta_data://default]
file_path = C:\test
interval = 15m
recurse = 1
only_if_changed = 0
include_file_hash = 0
file_hash_limit = 500MB
sourcetype = net:shares
index = test

Any thoughts on how to troubleshoot this? @LukeMurphey Thanks
After filling in the connection fields and clicking Save, Splunk returns the following error: "There has been an error processing your request. It has been logged (ID ea4e0b8f28c79b16)." Could someone help me?

Splunk version: 7.2.4
DB Connect version: 3.1.2
MySQL database with version 4.2 driver
Hi, I am using Splunk Enterprise version 8.0.1 with a bunch of indexes and roles for users. I created a role that has only one capability, "search", and permission to only one index (included as default). When I run a search such as * from Search & Reporting's search bar, I get the expected result: only events from the index the user has permission for. But if I run the search index IN("<index_with_permission>", "<index_not_permission_in_role>") I get all the results, including events from the index the user has no permission for. Any idea what could be the issue?
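One hedged first step for a situation like this is to confirm what the role is actually allowed to search by inspecting its srchIndexesAllowed and srchIndexesDefault settings over the REST endpoint (the role name below is a placeholder):

```spl
| rest /services/authorization/roles splunk_server=local
| search title="<role_name>"
| fields title srchIndexesAllowed srchIndexesDefault
```

If the unauthorized index shows up in srchIndexesAllowed (perhaps inherited from another role the user holds), that would explain why index IN(...) returns its events.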
Hi there, we have a search which covers multiple values as below (each field has a single value):

| chart count(serviceName) as total avg(totalFrontendLatency) as elapsetime max(totalFrontendLatency) as maxelapsetime

I want to add two extra results to the same search, but this time the field has two values. For example, if we want a count of the field "Processed", it has two cases:

case 1: Processed=true
case 2: Processed=false

How do I count these by true or false and show them in the same table as above? Please help.
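One way to sketch this, assuming Processed is an existing field whose literal values are true and false, is to add count(eval(...)) aggregations to the same chart command:

```spl
| chart count(serviceName)             AS total
        avg(totalFrontendLatency)      AS elapsetime
        max(totalFrontendLatency)      AS maxelapsetime
        count(eval(Processed="true"))  AS processed_true
        count(eval(Processed="false")) AS processed_false
```

Note that count(eval(...)) requires an AS rename, and the comparison value should match the field's actual values (e.g. quoted strings vs. booleans).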
Hello, I'm trying to change the write permission for a KV store lookup definition with an admin user. The lookup definition itself is global, and I want to add write permissions for another user. I'm getting this error:

cannot reassign privately shared entity to unowned

What can I do? Thanks, Sarit
I have a search which counts all IDS events of the last 12 months by severity. This search takes really long to run, so I don't want to rerun it every month for the complete time range of 12 months. I would like to just add the last month and search this over 6 months. My actual search is the following:

index=ids earliest="-1year@month" latest="@month" | fields severity | timechart count by severity span=1mon

Thank you in advance.
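A common pattern for this is summary indexing: a small monthly search stores last month's counts with collect, and the report then runs over the much smaller summary index. This is a sketch; the summary index name ids_summary is an assumption (create whichever index suits you).

Monthly scheduled search, covering only the previous month:

```spl
index=ids earliest=-1mon@mon latest=@mon
| bin _time span=1mon
| stats count BY _time severity
| collect index=ids_summary
```

Reporting search over the stored months:

```spl
index=ids_summary earliest=-1y@mon latest=@mon
| timechart span=1mon sum(count) BY severity
```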
How do I write a query to get data that is not present in a lookup table? I want to compare the input data with the lookup table and find only the data that is not present in the lookup table. I already tried the query below, but it is not working:

index=abc sourcetype=xyz | lookup data.csv ccid OUTPUT ccid AS found | where isnull(found)
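As a sketch of an alternative approach (assuming data.csv is a lookup table file visible to the app and ccid is the matching field on both sides), a NOT with an inputlookup subsearch drops every event whose ccid appears in the lookup:

```spl
index=abc sourcetype=xyz NOT [ | inputlookup data.csv | fields ccid ]
```

If the original lookup approach is preferred, it is worth checking whether data.csv should be referenced by its lookup definition name rather than the file name, since the lookup command resolves definitions first.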
Good day, I am having an issue where all users are randomly and incorrectly logged out (session timeout) while actively using the dashboard. I noticed that every time this occurs the splunk service has restarted: "ps -ef | grep splunk" shows a new timestamp after each session timeout, so Splunk is restarting for some reason. I am also seeing these logs in splunkd.log (see attached image).

And then there is this, but the permissions look correct:

ERROR ExecProcessor - message from "/usr/bin/timeout -s9 10m /opt/ping-scripts/ping_checks.sh -index ping-check" /usr/bin/timeout: failed to run command '/opt/ping-scripts/ping_checks.sh': Permission denied

I have also set the following:

web.conf
tools.sessions.timeout = 100000
ui_inactivity_timeout = 100000

server.conf
sessionTimeout = 4h

I also ran "sudo /opt/splunk/bin/splunk btool check --debug" to check for conf file syntax errors and corrected them all. Splunk does not give any errors when starting up. Thank you kindly!
Greetings! I developed a Service and KPI in Splunk ITSI and configured a correlation search to get an alert with alert_value ($result.alert_value$) when the KPI health score changes to Critical. I am receiving the alert correctly, but alert_value always shows 0.0 (which is the health score value, not the threshold field value), and I am expecting it to be the threshold field value. Is it possible to pass this threshold field value to the correlation search, or can you help guide me on where I can get this info, e.g. from the ITSI summary index? Here is the threshold field value (for example) when I run the generated search from the KPI in Splunk ITSI. Thank you.
I have a dashboard row with two panels, where the width of the panels is set in JavaScript to 20% and 80% using this script:

require(['jquery', 'splunkjs/mvc/simplexml/ready!'], function($) {
    // Grab the DOM for the score row
    var scoreRow = $('#score_row');
    // Get the dashboard cells (which are the parent elements of the actual panels and define the panel size)
    var panelCells = $(scoreRow).children('.dashboard-cell');
    // Adjust the cells' width
    $(panelCells[0]).css('width', '20%');
    $(panelCells[1]).css('width', '80%');
});

This works nicely for the dashboard configured with:

<row id="score_row">
    <panel>...</panel>
    <panel>...</panel>
</row>

However, as soon as I add depends="$var$" to the <row>, when var is set and the row is displayed, the two panel widths are set to 50%. If I inspect the page before the token is set, score_row has the hidden class; as expected it is no longer hidden after the token is set, but the widths are now 50%. Any idea how to control this redraw, so the JS affects the displayed size as intended?
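A hedged sketch of one way to handle this: since the row appears to be re-rendered when the token behind `depends` changes (resetting the inline widths), re-apply the widths after each token change. The row id `score_row` and token name `var` come from the question; the `submitted` token model and the deferred re-apply via setTimeout are assumptions to verify in your dashboard.

```javascript
// Pure helper so the width logic is easy to test outside the dashboard.
function applyWidths(cells, widths) {
  cells.forEach(function (cell, i) {
    if (widths[i]) cell.style.width = widths[i];
  });
  return cells;
}

// Only runs inside a Simple XML dashboard page; the guard keeps the file
// loadable elsewhere (e.g. under Node) without errors.
if (typeof document !== 'undefined' && typeof window !== 'undefined' && window.require) {
  window.require(['jquery', 'splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function ($, mvc) {
    var tokens = mvc.Components.get('submitted');
    function fixRow() {
      applyWidths($('#score_row').children('.dashboard-cell').toArray(), ['20%', '80%']);
    }
    fixRow();
    // Re-apply on the next tick after the token controlling `depends`
    // changes, i.e. after Splunk has redrawn the row at 50%/50%.
    tokens.on('change:var', function () { setTimeout(fixRow, 0); });
  });
}
```

If the redraw happens asynchronously after the token change, a MutationObserver on the row element would be a more robust trigger than the token event.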
I need to check the logs for workstation XYZ to ensure no one else besides JDOE has logged into it from 9/15/20 00:00:00.000 until 9/22/20 23:59:59.999 (7 days). What would be the best SPL to run to check this?
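A hedged sketch, assuming the Windows Security events live in an index named wineventlog and successful logons are EventCode 4624 (the index name and field names are assumptions; adjust to your environment):

```spl
index=wineventlog host=XYZ EventCode=4624
    earliest="09/15/2020:00:00:00" latest="09/23/2020:00:00:00"
| search user!="JDOE"
| stats count earliest(_time) AS first_seen latest(_time) AS last_seen BY user
```

Any rows returned would be logons by someone other than JDOE in that window.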
Hi all, in our distributed deployment we are getting an issue where 100% of scheduled searches are skipped, failing due to "searchable rolling restart or upgrade is in progress". Can anybody please suggest troubleshooting steps on how to rectify this issue? Thanks and regards, AG.
I intermittently receive the below error in the UF metrics log, and the indexer does not receive any logs from this host. The error goes away after some time and logs automatically start to flow again. Please let me know what could be the reason and how I can troubleshoot it.

destPort 9996, eventtype=connect_fail , publisher=tcpout sourcePort=8089 statusee=TcpOutputProcessor
Hi everyone, thanks in advance for reading and responding. I have an issue when using the Splunk SDK for Python and REST to perform a search. I am attempting to query for details given an SMTP message ID (the query parameter). About 75% of the queries work as expected and return data, while the rest indicate that there are no results (and of course, I can confirm through the GUI that the data is there).

As an example, the search query returns results for parameters (1), (2), and (3), but not for parameters (4) and (5):

92037848562344152638461b32.1739vb98635.290-9302924841.1701506175.7300a656@mail00.cat66.vvvv.net
AM7P191MB0581C4397B54F7DA07DD3DAF840D0@AM7P191MB0581.EURP191.PROD.OUTLOOK.COM
WHGD892HSG6328EA0C84C32E79576307E810D0@VXBSGHD82978GS.US9978WS.PRUDD.OUTLOOK.COM
PHJKUYU4758WHD74393JHEHE7387648Y3B0CC40D0@DSE334WS01MB4950.DEVDEV.predd.exchange.com
MU98SAHKJ8E87495023B503385D6E36513B0CC40D0@TEUYS899WK93UE3.DROID.svrti.resound.com

I am really confused about the reason behind this issue.
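One thing worth ruling out when some Message-IDs match and others don't is how the ID is embedded in the search string: if it is concatenated unquoted, characters in the ID can be treated as search syntax rather than a literal value. A sketch of building the query with the value quoted and escaped (the index and field names `index=mail` / `message_id` are hypothetical placeholders for your own sourcetype):

```python
def build_query(message_id):
    """Build a literal-match search for an SMTP Message-ID.

    Quoting the value keeps the whole ID together as one term instead of
    letting pieces of it be interpreted as search-language syntax.
    """
    escaped = message_id.replace('\\', '\\\\').replace('"', '\\"')
    return 'search index=mail message_id="%s"' % escaped


# Example: pass the result to the SDK, e.g. service.jobs.oneshot(build_query(mid))
```

It is also worth confirming that the failing queries aren't simply falling outside the job's earliest/latest time range, since the SDK's defaults may differ from the time picker used in the GUI.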
Hi, I have the following search:

search
| spath input=rawJsonData output=UserActionAttributes path=UserActionAttributes
| rex max_match=0 field=UserActionAttributes "pn:(?<partNumber>\d{4,5}[a-zA-Z]\d{1,3})"
| rex max_match=0 field=UserActionAttributes "compNm:((?<compName>(.*?))\")"
| eval zip = mvzip(partNumber,compName)
| mvexpand zip
| eval zip = split(zip,",")
| eval partNumber = mvindex(zip,0)
| eval compName = mvindex(zip,1)
| dedup bid, partNumber, compName
| eval compcheck = if(like(partNumber,"%".compName."%"),"Contained","Not Contained")
| table partNumber, compName, compcheck

The returned values could look like this (compName is not uniform; it could really be anything, and sometimes doesn't contain the part number at all, which is what I'm looking for):

partNumber: 333T4343
compName: blahblah_333T4343_blah

I want compcheck to tell me that compName contains partNumber in this case, but it's not working. I've also tried the following, which also does not work:

| eval compcheck = if(match(partNumber,compName),"Contained","Not Contained")

Please help!
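One thing worth checking here: in SPL, both like() and match() take the string to test as the first argument and the pattern as the second, so the expressions above test whether partNumber contains compName — the reverse of the stated intent. A sketch with the arguments swapped:

```spl
| eval compcheck = if(match(compName, partNumber), "Contained", "Not Contained")
```

Note that match() treats partNumber as a regular expression; since the extracted part numbers are purely alphanumeric, that should be safe in this case.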
I have an Alpine image with the Splunk forwarder installed in it. I am trying to monitor one log file from within the container and send it to the Splunk server.

Step 1: Base image with Alpine and the unzipped splunkforwarder.tgz file.
Step 2: Using this base image to dockerize my Node application:

RUN ../../../opt/splunk/bin/splunk start --accept-license --answer-yes --seed-passwd password
RUN ../../../opt/splunk/bin/splunk add forward-server server-ip:9997 -auth admin:username
RUN ../../../opt/splunk/bin/splunk add monitor ./logs -sourcetype logs
RUN ../../../opt/splunk/bin/splunk restart
RUN ../../../opt/splunk/bin/splunk enable boot-start # don't know if it's needed

I am still not receiving any logs in the Splunk server. Is there anything I need to do for the Splunk forwarder to communicate with the Splunk server from within the container? Thank you.
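A likely gotcha with the steps above: processes started in a RUN layer die when that build step finishes, so a `splunk start`/`splunk restart` at build time leaves no splunkd running in the final container. A hedged Dockerfile sketch (paths assume the forwarder lives under /opt/splunk as in the question; the outputs.conf/inputs.conf contents and the monitored path /app/logs are assumptions to adapt):

```dockerfile
# Write the forwarder config directly at build time -- editing files in a
# RUN layer persists, unlike starting a process.
RUN printf '[tcpout]\ndefaultGroup = default-autolb-group\n\n[tcpout:default-autolb-group]\nserver = server-ip:9997\n' \
      > /opt/splunk/etc/system/local/outputs.conf \
 && printf '[monitor:///app/logs]\nsourcetype = logs\n' \
      > /opt/splunk/etc/system/local/inputs.conf

# splunkd must be started when the *container* starts; --nodaemon keeps it
# in the foreground (or launch it alongside the Node app in an entrypoint
# script).
CMD ["/opt/splunk/bin/splunk", "start", "--accept-license", "--answer-yes", "--nodaemon"]
```

`enable boot-start` installs an init script, which generally does nothing useful inside a container without an init system.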
Let's say you have the following search:

... | stats sum(eval(sc_bytes/1073741824)) AS Gigabytes BY date

The resulting values in the Gigabytes column may have many characters after the decimal point. In a results table or a dashboard, one may format the values with commas or define precision in order to make the information easier to read at a glance. Is there a way to change how these values are displayed without changing the underlying information from the search? I know the following may be used to convert the values to a string, but is there a way to change the way these values are displayed without changing the number itself, perhaps because you want to keep it for later formulas?

... | stats sum(eval(sc_bytes/1073741824)) AS Gigabytes BY date | eval Gigabytes=printf("%.4f",Gigabytes)
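One approach that fits this exact requirement is the fieldformat command, which changes only how a field is rendered in the results while leaving the stored value numeric for later calculations:

```spl
... | stats sum(eval(sc_bytes/1073741824)) AS Gigabytes BY date
| fieldformat Gigabytes = printf("%.4f", Gigabytes)
```

Any eval or stats step added after this still sees the full-precision number, not the formatted string.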
Hi everyone, I have one requirement. I have multiple dashboards, and I want to calculate the usage of the dashboards by search user. These are my searches:

SplunkMetadataCounter
JenkinsBuildReport
Extract
......
......

For an individual one I am using this query:

index="_internal" SplunkMetadataCounter | stats count by search user

I am getting a result like this:

search: search+index_internal%22+SplunkMetadataCounter    user: kh    count: 1

I want to calculate the dashboard usage on the basis of search and user. I want some query like this (not sure whether this is accurate):

index="_internal" SplunkMetadataCounter | eval dashboard_name = (if the search or sourcetype contains "SplunkMetadataCounter", then the dashboard name is "SplunkMetadataCounter"; else if the search or sourcetype contains "JenkinsBuildReport", then the dashboard name is "BuildReports"; ...) | stats count by search user

At the end I want one query which will tell me something like "SplunkMetadataCounter is used 40 times, JenkinsBuildReport is used 20 times":

SplunkMetadataCounter - 40
JenkinsBuildReport - 20
Extract - 10

Can someone guide me on this?
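A sketch of the case() shape described above, assuming each dashboard's name reliably appears in the logged search field as in the example result (the keyword list and name mappings are placeholders to extend):

```spl
index="_internal" (SplunkMetadataCounter OR JenkinsBuildReport OR Extract)
| eval dashboard_name = case(
    match(search, "SplunkMetadataCounter"), "SplunkMetadataCounter",
    match(search, "JenkinsBuildReport"),    "JenkinsBuildReport",
    match(search, "Extract"),               "Extract")
| stats count BY dashboard_name user
```

Dropping `user` from the BY clause (`| stats count BY dashboard_name`) gives the per-dashboard totals in the "SplunkMetadataCounter - 40" shape.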