All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all, I need help using two stats operations in one search. My search:

index=opennms "uei.opennms.org/nodes/nodeUp" OR "uei.opennms.org/nodes/nodeDown"
| rex field=eventuei "uei.opennms.org/nodes/node(?<Status>.+)"
| stats max(_time) as Time latest(Status) as Status by nodelabel
| table nodelabel, Status, Time
| lookup ONMS_nodes.csv nodelabel OUTPUT sitecode
| table nodelabel, Status, Time, sitecode

Output table:

nodelabel  Status  Time             sitecode
AOKWA1     Down    1/20/2021 20:36  KWA
BXTPR      Down    1/20/2021 20:34  TPR
UKHCO      Down    1/20/2021 20:32  HCO
AOKWA3     Down    1/20/2021 20:30  KWA
AOKWA2     Down    1/20/2021 20:29  KWA
AOKWA4     Down    1/20/2021 20:18  KWA

Here KWA appears four times. Requirement: I need to keep only the sitecodes that occur more than three times (all the KWA rows should be listed in my output table; ignore every sitecode with a lower count).
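A sketch of one way to do this with a second aggregation: eventstats counts per sitecode while keeping the per-row detail, and a where clause then filters on that count (adjust > 3 vs >= 3 depending on which reading of the requirement you need):

```spl
index=opennms "uei.opennms.org/nodes/nodeUp" OR "uei.opennms.org/nodes/nodeDown"
| rex field=eventuei "uei.opennms.org/nodes/node(?<Status>.+)"
| stats max(_time) as Time latest(Status) as Status by nodelabel
| lookup ONMS_nodes.csv nodelabel OUTPUT sitecode
| eventstats count as sitecode_count by sitecode
| where sitecode_count > 3
| table nodelabel, Status, Time, sitecode
```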
The goal is to get a count when a specific value exists, 'by id'. This is not working on a coalesced search. The search below works; it looks at two source types with different field names that hold the same type of values. I used this approach because appendcols is computationally very costly and I try to avoid it as much as possible. One alternative I tried was:

count(eval(if(isnotnull(calcValue),1,null())))

Lastly I tried the following, but it gives an error:

count(eval(if("total-calcValue">0,1,null())))

Error in 'stats' command: The eval expression for dynamic field 'if("total-calcValue">0,1,null())' is invalid. Error='Type checking failed. The '>' operator received different types.'.

Below is the full search:

index=someindex (sourcetype1) OR (sourcetype2)
| rex field="Some Amt" "-(?<Amount>[\d\.]+)"
| convert num(Amount)
| rename Amount as total-calcValue, total as total-charges, "Some ID" as miq, "Other Source Company Name" as "other_source_company_name"
| eval mid=coalesce(mid,miq)
| eval Company=coalesce(other_source_company_name,company_name)
| stats count first(Company) AS "Company" sum(total-*) AS * count(eval(if(isnotnull("total-calcValue"),1,null()))) as "calcValue Count" by mid
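The type-checking error most likely comes from the double quotes: inside an eval expression, "total-calcValue" is a string literal, so the > comparison mixes a string with a number. Single quotes are how eval references field names containing special characters such as hyphens. A hedged sketch of the corrected stats clause:

```spl
| stats count
        first(Company) AS "Company"
        sum(total-*) AS *
        count(eval(if('total-calcValue' > 0, 1, null()))) AS "calcValue Count"
        by mid
```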
I am indexing very large files each day, each on the order of 20+ GB. I am using [batch] with move_policy = sinkhole so that each file is read, indexed, and intentionally deleted. However, sometimes the number of events indexed is less than the number of events in the file. Here is the relevant inputs.conf segment:

[batch:///my_path_to_the_file/*.import]
move_policy = sinkhole
sourcetype = my_sourcetype
index = my_index
crcSalt = <SOURCE>
disabled = false

These large files are SFTP'd to the heavy forwarder / dropbox, and the transfer can take 15+ minutes to complete. I am wondering whether the [batch] input will take a snapshot of the file and start indexing it after it arrives but before the transfer has completed; I am presuming that [batch] only looks at the file once. Essentially, can what I attempt to show in the following image actually occur?
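If the batch input can indeed pick the file up mid-transfer, a common defensive pattern (sketched below with stand-in paths) is to SFTP the file under a name the [batch] wildcard (*.import) does not match, then rename it atomically once the transfer completes, so the input only ever sees finished files:

```shell
# Stand-in for /my_path_to_the_file; the .part suffix is a convention, not a
# Splunk requirement -- any name the *.import wildcard won't match works.
SPOOL=$(mktemp -d)
printf 'event1\nevent2\n' > "$SPOOL/today.import.part"   # SFTP writes here
mv "$SPOOL/today.import.part" "$SPOOL/today.import"      # atomic on same filesystem
ls "$SPOOL"
```

Because the rename happens within the same filesystem, the completed file appears under its final name in a single step, and the in-flight name is never visible to the batch wildcard.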
Hi Cannot delete an Alert. There is no Delete Option under Alert's Edit Menu. Please advise how to delete. best regards Altin
Hi Splunkers, I'm having issues ingesting Windows DNS Server Analytical logs. What's strange is that I am able to pull Audit logs with the following stanza in inputs.conf:

[WinEventLog://Microsoft-Windows-DNSServer/Audit]
disabled = 0
index = dns

When I do the same thing for Analytical, it does not work:

[WinEventLog://Microsoft-Windows-DNSServer/Analytical]
disabled = 0
index = dns

Has anyone had any luck here? Thank you!
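For what it's worth, Windows Analytic and Debug channels generally cannot be subscribed to through the event log API while they are enabled, which would explain why the stanza that works for Audit fails for Analytical. One workaround sketch (the path and sourcetype below are assumptions to verify for your environment) is to export the channel to .evtx files on a schedule and monitor those instead:

```ini
# inputs.conf on the DNS server -- monitor periodic .evtx exports of the
# Analytical channel rather than subscribing to it directly
[monitor://C:\dns_exports\DNSServer-Analytical*.evtx]
sourcetype = preprocess-winevt
index = dns
disabled = 0
```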
Hello all, I want an alternative for the following search command that gives the same output:

| bin span=1W _time aligntime=latest
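One hedged alternative is to compute the bucket with eval, anchoring week-sized spans to the latest time in the search window (addinfo exposes it as info_max_time). Behaviour at the exact boundary may differ slightly from bin, so verify against a known result:

```spl
...
| addinfo
| eval span=604800
| eval _time=info_max_time - span * (floor((info_max_time - _time) / span) + 1)
| stats count by _time
```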
I have an index (server_patching) that contains details such as the change number and patching date of servers that were patched on a given date, and a lookup table that lists all the servers present in my environment. Let's say I have 10 servers in the lookup table (1, 2, 3, 4, 5, 6, ..., 10), of which:

servers 1, 2, 3 were patched on Nov 1st with patch numbers p1, p2, p3
servers 4, 5, 6, 7, 8 were patched on Dec 1st with patch numbers p4, p5, p6, p7, p8
servers 9, 10 were patched on Jan 1st with patch numbers p9, p10

I want to write a query that gives me the server names and patch numbers of the servers that were NOT patched in a given time range. For example, if I select Jan 1st I should get:

Servers: 1, 2, 3, 4, 5, 6, 7, 8
Patch numbers: p1, p2, p3, p4, p5, p6, p7, p8

Thanks in advance.
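One sketch (the lookup and field names below are guesses for your environment): start from the lookup of all servers, drop the ones that appear in the patching index within the selected time range, then join each remaining server to its last known patch number from all time:

```spl
| inputlookup all_servers.csv
| search NOT
    [ search index=server_patching earliest=-30d@d latest=now
      | dedup server
      | fields server ]
| join type=left server
    [ search index=server_patching earliest=0
      | stats latest(patchNo) as patchNo by server ]
| table server, patchNo
```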
Hi, we are using boolean and array as types for our KV store collections. They have always worked, but they cause validation failures in AppInspect and errors in search.log. Changing boolean to bool is easy, but what about array — isn't it supported at all?

We found the boolean, array, and cidr types in the REST KV store docs: https://docs.splunk.com/Documentation/Splunk/8.1.1/RESTREF/RESTkvstore#Arrays

But the developer docs say only '( number | bool | string | time )' is allowed: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/kvstore/usingconfigurationfiles

So how and where can we use the array type?

08-19-2020 08:58:41.873 +0200 ERROR CollectionConfigurationProvider - Invalid field type='boolean' for field='enabled' in collection='ux_applications' and app='mcg_uxm'
08-19-2020 08:58:41.873 +0200 ERROR CollectionConfigurationProvider - Invalid field type='array' for field='processes' in collection='ux_applications' and app='mcg_uxm'
08-19-2020 08:58:41.873 +0200 ERROR CollectionConfigurationProvider - Invalid field type='array' for field='tags' in collection='ux_applications' and app='mcg_uxm'

collections.conf:

[ux_applications]
field.name = string
field.customer = string
field.contact = string
field.external_analysis_link_url = string
field.external_analysis_link_name = string
field.external_trace_link_url = string
field.external_trace_link_name = string
field.type = string
field.storage_used_mb = string
field.storage_quota_mb = string
field.tags = array
field.processes = array
field.enabled = boolean
replicate = true
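A possible workaround, based on the fact that field declarations in collections.conf are optional and undeclared fields are schemaless (this is an assumption to verify against your Splunk version): rename boolean to bool and simply omit the array-valued fields, letting the REST API store JSON arrays in them anyway:

```ini
[ux_applications]
field.name = string
field.enabled = bool
# field.tags and field.processes intentionally left undeclared --
# untyped fields still accept JSON arrays through the KV store REST API
replicate = true
```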
Hello team, I've created a dashboard that displays some metrics, and I define their meaning in the report using HTML tags. When I receive the reports I get the error below:

Parse error: saw </font> instead of expected </b>paragraph text u'<para><font face=Helvetica><para> <b>Note : </font><font face=HeiseiMin-W3>\u261e</font><font face=Helvetica><br/> <u>[1]</u>: <i>Availability_Percentage:</i>

The HTML I am using is as follows. Please help me rectify the error. My Splunk version is 8.0.6.

<html>
<p>
<b>Note : ☞<br/> <u>[1]</u>: <i>Availability_Percentage:</i> </b> It is the Percentage of Successful over Expected <br/>
<b> <u>[2]</u>: <i>Success_Percentage: </i> </b> It is the Percentage of Successful Actual <br/>
<b> <u>[3]</u>: <i>SLABreached_Percentage:</i> </b> It is the Percentage of SLA Breached Runs over Actual
</p>
</html>
Running | curl method=get against my URL works, but when I set it up as a modular input I get errors in ta_webtools_curl_inputs.log. I am running version 2.0.2 of the tool on Splunk Enterprise 8.0.4. The error:

2021-01-20 12:08:50,106 ERROR pid=125277 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-webtools/bin/ta_webtools/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-webtools/bin/curl_inputs.py", line 84, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-webtools/bin/input_module_curl_inputs.py", line 43, in collect_events
    if len(payload)>0:
TypeError: object of type 'NoneType' has no len()

Any pointers to what may cause this error? Perhaps @jkat54?
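Judging from the traceback, the response payload is None (for example an empty body or a failed request) at the point where the input calls len() on it. A minimal sketch of the kind of guard collect_events would need (the function name and parsing below are illustrative, not the TA's actual code):

```python
def split_payload(payload):
    """Return a list of event lines, tolerating a None or empty payload."""
    if not payload:          # covers both None and ""
        return []
    return payload.splitlines()

print(split_payload(None))       # [] instead of a TypeError
print(split_payload("a\nb"))     # ['a', 'b']
```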
Hi, we have a Splunk Cloud instance and are looking to switch from one SAML provider to another. The user attributes will stay exactly the same. My question: if we make the switch, will the existing user accounts configured through the current SAML provider remain, or will switching mean all existing accounts are deleted and new accounts created? Thank you.
Hi, I am building a dashboard for my application, which is monitored in Splunk. As part of this I am getting the timestamps of the latest login and latest logout. Below is a sample of the output from the query I have built (query provided below). The output I am looking for in this case is AAA & CCC as RMID. Kindly suggest.

RMID    LTIME                      OTIME
AAA     19-01-2021 10:55:32:002    19-01-2021 08:32:32:001
BBB     19-01-2021 11:50:12:002    19-01-2021 12:52:32:001
CCC     19-01-2021 10:55:32:002

Below is my search query:

index="XXX" kubernetes_namespace="uat" LoginStatus IN ("Authentication Success")
| eventstats max(AuthenticationTime) as LoginTime by RMID
| append
    [ search index="" kubernetes_namespace="uat" LoginStatus IN ("Logout Success")
      | eventstats max(AuthenticationTime) as LogoutTime by RMID ]
| eventstats values(LoginTime) AS LTime, values(RMID) as RMID, values(LogoutTime) AS OTime
| table RMID, LTime, OTime
| dedup RMID
| eval LoggedInTime = strptime(LoginTime,"%Y-%m-%d %H:%M:%S.%N")
| eval LoggedOutTime = strptime(OTime,"%Y-%m-%d %H:%M:%S.%N")
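A hedged alternative that avoids the append entirely: compute both timestamps in a single stats pass, then keep only the RMIDs whose logout is missing or older than the login. The strptime format below matches the sample display ('DD-MM-YYYY HH:MM:SS:mmm'); adjust it to the raw field's actual layout:

```spl
index="XXX" kubernetes_namespace="uat" LoginStatus IN ("Authentication Success", "Logout Success")
| eval t=strptime(AuthenticationTime, "%d-%m-%Y %H:%M:%S:%3N")
| stats max(eval(if(LoginStatus=="Authentication Success", t, null()))) as LTime
        max(eval(if(LoginStatus=="Logout Success", t, null()))) as OTime
        by RMID
| where isnull(OTime) OR OTime < LTime
| fieldformat LTime=strftime(LTime, "%d-%m-%Y %H:%M:%S:%3N")
| fieldformat OTime=strftime(OTime, "%d-%m-%Y %H:%M:%S:%3N")
```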
We have created a custom streaming command that does a computation based on two fields and adds a third field with the result. This works fine on a single-instance deployment, and on our search head cluster when used with `makeresults`; however, once the search heads offload the task to the indexing cluster, each indexer throws the error 'No module named 'splunklib''. The app has the custom command script in `./bin` and the `splunklib` directory from the SDK in `./lib/`.

#!/usr/bin/env python3
# other imports for computation
import os
import sys

# make the SDK bundled in ../lib importable before importing splunklib
LibPath = os.path.realpath(os.path.join(os.path.dirname(__file__), '..', 'lib'))
sys.path.insert(0, LibPath)

from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option

@Configuration()
class OurCommand(StreamingCommand):
    # Option definitions

    def stream(self, events):
        for event in events:
            # computation logic
            yield event

dispatch(OurCommand, sys.argv, sys.stdin, sys.stdout, __name__)

The full error from the log on each indexer is:

01-20-2021 09:37:33.390 INFO ChunkedExternProcessor - Running process: /opt/splunk/bin/python3.7 /opt/splunk/var/run/searchpeers/7E2EF370-95B8-474E-B6F6-47F96425213C-1611135318/apps/TA-ourcommand/bin/OurCommand.py
01-20-2021 09:37:33.437 ERROR ChunkedExternProcessor - stderr: Traceback (most recent call last):
01-20-2021 09:37:33.437 ERROR ChunkedExternProcessor - stderr: File "/opt/splunk/var/run/searchpeers/7E2EF370-95B8-474E-B6F6-47F96425213C-1611135318/apps/TA-ourcommand/bin/OurCommand", line 16, in <module>
01-20-2021 09:37:33.437 ERROR ChunkedExternProcessor - stderr: from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option
01-20-2021 09:37:33.437 ERROR ChunkedExternProcessor - stderr: ModuleNotFoundError: No module named 'splunklib'
01-20-2021 09:37:33.440 ERROR ChunkedExternProcessor - EOF while attempting to read transport header read_size=0
01-20-2021 09:37:33.440 ERROR ChunkedExternProcessor - Error in 'ourcommand' command: External search command exited unexpectedly with non-zero error code 1.
01-20-2021 09:37:33.445 ERROR SearchPipelineExecutor - sid:remote_searchead.fqdn__am9lLnBpdHQ_am9lLnBpdHQ_amlzY19jc2lydF9jdGk__search3_1611135450.4456_4BFB5A9E-3ADF-405B-B424-200C91CD6F72 Streamed search execute failed because: Error in 'ourcommand' command: External search command exited unexpectedly with non-zero error code 1..

It appears the custom search command is being copied to a temporary folder under `/opt/splunk/var/run/`, so we wonder if the lib directory is being missed.

Any suggestions on how to resolve this would be really appreciated.
Running the Splunk SDK examples on Windows 10 with the command:

node sdkdo examples

Node version is 12.18.4; Splunk Enterprise version is 8.11. I verified that splunkd at localhost:8089 is up and running. Is this an obsolete SDK? The copyright is 2011-2012. The URL where the error occurs is:

http://localhost:6969/proxy/services/auth/login?output_mode=json

The documentation suggests using a reverse proxy in production, so for development the reverse proxy at /proxy seems to be built in. The docs are not clear about that.
Hi Splunkers, I initially created a field extraction based on source type in Splunk. Later I wanted to modify it, but after the change the new field extraction was not working; only the old field extractions kept working. So I deleted it in order to create a new one. Even after deleting the field extraction, I can still see the old fields in Splunk, and now I am not able to overwrite them either. I have tried both search-time and index-time extractions, and nothing works: only the old fields are extracted. I need to remove the old fields and create new ones. Can anyone help with this case? Thanks in advance!
Hi! I have completed both Splunk System Administration and Splunk Data Administration. To get a certification, as far as I know, I need to pass both the Splunk User and Splunk Power User exams. Next, I should take the Splunk Administrator exam. However, I can't find this exam. Am I missing something? Thanks in advance!
Hello everyone, I've created a dedicated dashboard containing a table that displays the latest update of each saved search, its status and owner, and also its cron schedule value, so I can have a daily overview — a kind of health monitor of the saved searches. Would it be possible to change the cron schedule value directly from my dashboard, without having to go to the dedicated page where all the saved searches are managed? Thank you!
I have to forward the data from my single-instance indexer to another system, i.e. an indexer and a third-party system. I have tried the configuration below, but I am only receiving the local system data (/var/log/cron, as defined in the input). How can I get the data from all the hosts configured to send to my indexer, i.e. the log files of system1, system2, etc.?

[root@splunkvm]# cd /opt/splunk/etc/system/local
[root@splunkvm local]# cat inputs.conf
[splunktcp://9997]
connection_host = ip

[monitor:///var/log/cron]
disabled = false
#_INDEX_AND_FORWARD_ROUTING=local
index = index2
sourcetype = linux_logs
_TCP_ROUTING = indexer

[root@splunkvm local]# cat props.conf
[source::/var/log/cron]
TRANSFORMS-routing=indexer

[root@splunkvm local]# cat transforms.conf
[indexer]
REGEX= .
DEST_KEY=_TCP_ROUTING
FORMAT=thirdindexer

[root@splunkvm local]# cat outputs.conf
[tcpout]
indexAndForward = 1

[tcpout:thirdindexer]
server = 192.168.x.x:9997
[root@splunkvm local]#
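The props stanza above only matches the locally monitored source (source::/var/log/cron), so events arriving over splunktcp from the other hosts never get _TCP_ROUTING set. One sketch is to key the transform on every host, which applies the routing to forwarder data as well — provided that data is actually parsed on this indexer (i.e. it comes from universal forwarders, not from another heavy forwarder that has already cooked it):

```ini
# props.conf
[host::*]
TRANSFORMS-routing = indexer

# transforms.conf
[indexer]
REGEX = .
SOURCE_KEY = _raw
DEST_KEY = _TCP_ROUTING
FORMAT = thirdindexer
```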
Hi, here is my raw data:

ID, Version, Date, Status
10874381,1,2020-01-15T08:36:00Z,New
10874381,1,2020-01-15T08:46:00Z,Completed - Action Performed
14688643,1,2016-10-06T06:30:00Z,New
14688643,1,2016-10-07T08:32:00Z,Investigating
14688643,1,2016-10-24T15:10:00Z,Completed - Nothing Found

I need to create another field to add information to this data. To do that I need to build pairs of consecutive statuses:

Record Number | Status 1      | Status 2                     | Result
10874384      | New           | Completed - Action Performed | Completed Actions
14688643      | New           | Investigating                | Work
14688643      | Investigating | Completed - Nothing Found    | Completed

I cannot know in advance how many statuses an ID may have (maybe 1, 2, 7 or more), and I do not know how to build a pair from two different events in Splunk.

Regards, Clement
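One way to build the consecutive-status pairs is streamstats, which can carry the previous event's status forward within each ID (the Result mapping below is a guess at the intended rules and will need adjusting):

```spl
| sort 0 ID, Date
| streamstats current=f window=1 last(Status) as Status1 by ID
| where isnotnull(Status1)
| rename Status as Status2
| eval Result=case(Status2=="Completed - Action Performed", "Completed Actions",
                   like(Status2, "Completed%"), "Completed",
                   true(), "Work")
| table ID, Status1, Status2, Result
```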
Hello everyone. As the title says, we need to run scripts that put Splunk search heads into manual detention (and also run other management CLI commands), and we are looking for a way to do that without writing the admin password in plain text inside the command or leaving it in the user's shell history. Is there any way to do that? The only option I have found so far is the -auth parameter, while the use of the REST endpoint is not very clear to me and I'm not sure it fulfils my requirements. Any help? Thanks in advance!
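A minimal sketch of keeping the password out of the command line and history: store it in a mode-600 file and read it into a variable at runtime (the path and demo value below are hypothetical):

```shell
# Read admin credentials from a restricted file instead of typing them inline.
PASSFILE="${PASSFILE:-$(mktemp)}"
printf 'changeme' > "$PASSFILE"        # demo value only; provision this securely
chmod 600 "$PASSFILE"
SPLUNK_PASS="$(cat "$PASSFILE")"
echo "loaded ${#SPLUNK_PASS} characters"

# The variable can then feed -auth without the password ever being typed, e.g.:
#   /opt/splunk/bin/splunk edit shcluster-config -manual_detention on \
#       -auth "admin:${SPLUNK_PASS}"
```

Note that ps can still expose the arguments of a running process, so stdin-based authentication or session tokens are worth evaluating as well.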