All Topics

Hi Team, I have 4 indexers in my environment. Using a utility server, I am pushing bundles to the members with apply cluster-bundle. I can see that the master-apps present on the utility server are not being reflected on just one of the indexers (the other 3 indexers seem fine). I have tried destroying and re-spinning up the instance, taking that indexer offline and out of the cluster, adding it back, running apply cluster-bundle, and removing all slave-apps. However, on that particular peer the slave-apps are still not getting copied. I also tried utility console -> push bundle, still no luck. How are slave-apps populated from master-apps when running apply cluster-bundle? The slave-apps directory gets created on the indexer side, and I don't see anywhere in my code that I am creating it. Any leads on where to check next for errors/logs? Let me know if you need more info. Thanks.
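In case it helps with the debugging: the per-peer push status can be checked from the manager/utility node, and bundle download errors usually show up in splunkd.log on the problem peer. A sketch of the commands I would run (paths assume a default /opt/splunk install):

```shell
# On the manager (utility) node: per-peer bundle status and checksums
/opt/splunk/bin/splunk show cluster-bundle-status

# On the problem peer: look for bundle download/untar errors
grep -i "bundle" /opt/splunk/var/log/splunk/splunkd.log | tail -50
```

A peer whose active bundle checksum differs from the others, or which logs errors downloading or untarring the bundle, is usually the place to start.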
Hello Everyone! Currently the result of my query is below:

Input:

id     URL
101    https://......-28.../../..../..../..../12304
102    https://......-28.../../..../..../..../34569
       https://......-02.../../..../..../..../8976
       https://......-28.../..../..../741256
103    https://......-06.../..../..../..../5678
       https://......-04.../../..../..../..../158930

I would like to have the output below:

Output:

id     URL                                             fieldA    fieldB
101    https://......-28.../../..../..../..../12304    28        12304
102    https://......-28.../../..../..../..../34569    28        34569
102    https://......-02.../../..../..../..../8976     02        8976
102    https://......-28.../..../..../741256           28        741256
103    https://......-06.../..../..../..../5678        06        5678
103    https://......-04.../../..../..../..../158930   04        158930

I have tried rex, mvcombine, and makemv, but I am not able to achieve this result; I am not sure whether I am using them correctly. Can you please help me get this output?
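If the URLs all follow the shape above, the extraction itself is one regex: the digits after the first "-" in the host become fieldA, and the trailing digits of the path become fieldB. In SPL that would be a rex (plus filldown id to repeat the blank ids), roughly `| rex field=URL "-(?<fieldA>\d+)[^/]*/.*/(?<fieldB>\d+)$" | filldown id`. A minimal Python sketch of the same pattern, using a made-up full URL since the real ones above are truncated:

```python
import re

# Hypothetical full URL; the real ones in the post are truncated with "..."
url = "https://example-28.corp.net/a/b/c/d/12304"

# fieldA: digits after the first "-" in the host; fieldB: trailing digits of the path
m = re.search(r"-(?P<fieldA>\d+)[^/]*/.*/(?P<fieldB>\d+)$", url)
print(m.group("fieldA"), m.group("fieldB"))  # → 28 12304
```

The host name, path, and field names here are assumptions for illustration; the regex is the part that carries over into the rex command.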
We have a large CSV file that a user relies on through an automatic lookup. The lookup only needs to be stored and searched locally on the single search head, but we're getting errors from distributed search about being unable to load the lookup (since it doesn't exist there). In a search you can add local=true, which resolves this issue. Is it possible to add this for automatic lookups? I see nothing about it in the docs, and the only other community question on this is from 2012, where they say it's not supported... hoping that's changed since then? If not, what other options do I have?
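One related knob worth knowing about, though I can't say it fully solves the automatic-lookup case: distsearch.conf on the search head can exclude a large CSV from the knowledge bundle shipped to search peers. A sketch (the stanza key, app, and file names here are placeholders):

```
# distsearch.conf on the search head -- keep the big CSV out of the bundle
[replicationBlacklist]
huge_lookup = apps/search/lookups/big_user_lookup.csv
```

This is the usual mechanism for large lookups that should not travel to the peers; whether the automatic lookup then degrades gracefully or still errors is worth testing on your version.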
I have a Splunk trial version and I am trying to push AWS WAF logs through HEC. I have enabled the token correctly and have tried several endpoints, but it didn't work. Most of the Splunk documentation covers how to enable the endpoints for a Splunk Cloud instance, but I could not find any document that explains how to configure the endpoints for a Splunk Enterprise trial. Meanwhile, I tried all three endpoint formats:

<protocol>://<host>:<port>/<endpoint>
<protocol>://input-<host>:<port>/<endpoint>
<protocol>://http-inputs-<host>:<port>/<endpoint>
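For what it's worth, the input- and http-inputs- hostname prefixes apply to Splunk Cloud; on a local Splunk Enterprise instance, HEC listens on port 8000's sibling management-style port 8088 by default, at /services/collector/event. A direct curl test looks roughly like this (host and token are placeholders):

```shell
# Default Splunk Enterprise HEC endpoint: port 8088, /services/collector/event
curl -k "https://localhost:8088/services/collector/event" \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{"event": "hello from curl", "sourcetype": "aws:waf"}'
```

A successful send returns {"text":"Success","code":0}; if this works but AWS can't reach it, the problem is network/SSL between AWS and the instance rather than the token.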
Hello everyone! I have recently configured my MineMeld modular input to be ingested into Splunk, but I keep getting the following errors:

09-22-2020 12:11:14.101 -0400 ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/minemeld_feed.py" TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
09-22-2020 12:11:14.101 -0400 ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/minemeld_feed.py" indicator_timeout = int(helper.get_arg('indicator_timeout')) * 3600
09-22-2020 12:11:14.101 -0400 ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/minemeld_feed.py" File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/minemeld_feed.py", line 72, in collect_events
09-22-2020 12:08:43.958 -0400 ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/minemeld_feed.py" TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

I tried accessing the feed page, and I can access it from the Splunk server. Has anyone experienced the same issue? Thanks
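Reading the traceback: helper.get_arg('indicator_timeout') returned None, i.e. the input was saved without that argument, so int(None) raises the TypeError. Re-saving the input with a value in that field is the real fix; purely to illustrate the failure mode, here is a self-contained sketch of the failing line with a defensive fallback (the default of 24 hours is my own assumption, not the TA's):

```python
def indicator_timeout_seconds(raw_arg, default_hours=24):
    """Convert an 'indicator_timeout' argument (hours) to seconds,
    falling back to a default when the argument is missing (None)."""
    hours = int(raw_arg) if raw_arg is not None else default_hours
    return hours * 3600

print(indicator_timeout_seconds(None))  # → 86400
print(indicator_timeout_seconds("8"))   # → 28800
```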
Hello, I would like to know how forwarders handle rolling logs when their target indexers become unavailable. Here is a simple scenario:

- My application creates a log "application.log"
- At midnight, "application.log" gets rolled to "application.backup" and a new "application.log" gets created

Assuming my indexer goes down at 11pm and gets restored at 1am the following day, there is 1 hour of log data that will get rolled to "application.backup" and 1 hour of data contained in the new "application.log" when the indexer gets restored.

My question relating to the above scenario: will the forwarder keep track of the hour's worth of data that was rolled to "application.backup", as well as the hour's worth of data written to "application.log", and send both to the indexer once it becomes available?

Thank you! Andrew
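For context: the forwarder tracks monitored files by content checkpoints (the "fishbucket"), not by file name, so it recognizes the renamed file and does not re-read what it already sent; while the indexers are unreachable it stops advancing those checkpoints, and unread data in both files is picked up once the connection is restored, provided the rolled file still matches a monitor stanza and isn't deleted first. A sketch of an inputs.conf stanza that covers both names (paths are hypothetical):

```
# inputs.conf sketch -- the wildcard matches both the live log and the rolled copy
[monitor:///var/log/myapp/application.*]
index = app_logs
sourcetype = myapp:log
```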
Hi, I have the scenario below, where a sample gym has many customers and their accounts. Some are individual and some are individual plus co-signer. I need the name combinations below extracted via regex, if possible, into new fields respectively, as shown in the table below. Where fields come out null after extraction, I will just fill them with fillnull or eval. Thanks in advance!!! Sample:
I have an ADQL query that is trying to count the number of times a customer is viewed. I have a data collector that adds this data to Transactions. Here is the query:

SELECT segments.userData.CustomerName, segments.userData.CustomerNumber, segments.userData.Agreement, count(segments.userData.CustomerName)
FROM transactions
WHERE application = "TruckCare-CustomerProfile"
  AND segments.userData.CustomerName IS NOT NULL

But even with the WHERE clause CustomerName IS NOT NULL, there is still a row showing up with all nulls. I know there are transactions that don't have CustomerName, CustomerNumber, or Agreement on them, but I expected IS NOT NULL to filter those out. Any help much appreciated. Thanks
Dear Splunkers, The Splunk server certificates on servers running the Splunk forwarder are expiring. Is there a way to upgrade them using the deployment server? We have approx. 2000 clients. Do we need to upgrade all of them at once? Regards, Abhishek
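The deployment server can push the new certificates packaged as an app, and you don't have to hit all 2000 clients at once: one server class per batch lets you roll in waves and verify before continuing. A serverclass.conf sketch (class, app, and whitelist names are placeholders):

```
# serverclass.conf sketch -- roll the cert app out to one batch of clients
[serverClass:cert_rollout_wave1]
whitelist.0 = webfarm-*

[serverClass:cert_rollout_wave1:app:new_server_certs]
restartSplunkd = true
```

The app itself would carry the new certs plus the server.conf/inputs.conf settings pointing at them; subsequent waves are just additional server classes with different whitelists.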
Hi, I am trying to install the .NET agent on the servers but I'm getting this message:

COR_PROFILER is set. Please uninstall existing profiler and try again. Press any key to exit.

What should I do next?
Hello Splunkers, We have all our logs collected in S3. What would be the best option to send logs from S3 to Splunk? I know there is an AWS add-on available to get the data in, but would it be the best option if we need to assign multiple sourcetypes to data coming from the same S3 bucket? Thanks in advance
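On the multiple-sourcetypes point: if the objects land under distinct key prefixes in the bucket, sourcetypes can usually be assigned per source path in props.conf on the ingesting instance, independently of which input brings the data in. A sketch (bucket and prefix names are invented):

```
# props.conf sketch -- sourcetype by S3 key prefix
[source::s3://my-bucket/cloudtrail/*]
sourcetype = aws:cloudtrail

[source::s3://my-bucket/elb/*]
sourcetype = aws:elb:accesslogs
```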
Hi all, I have been trying to create a dashboard for searches based on a specific ID number (this is intended to help team members who aren't proficient in writing searches find connected data). The panels are backed by some extensive search macros. Anyway, this is a two-part question regarding XML development:

First, is there a way via XML to check for a null return from the search, so that I can hide panels that return nothing and display only those that return a value?

Second, how would one actually go about using XML to hide or display panels, depending on whether a search returns no results or a valid result?

Thank you.
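Both parts can be done in Simple XML with tokens: set a token from the search's job.resultCount in a done handler, and gate the panel with depends. A minimal sketch (search query and token names are placeholders):

```xml
<panel depends="$show_id_panel$">
  <table>
    <search>
      <query>`my_id_macro($id_tok$)`</query>
      <done>
        <condition match="'job.resultCount' &gt; 0">
          <set token="show_id_panel">true</set>
        </condition>
        <condition>
          <unset token="show_id_panel"></unset>
        </condition>
      </done>
    </search>
  </table>
</panel>
```

When the search returns zero results the token is unset, and a panel whose depends token is unset is hidden; when results arrive the token is set and the panel appears.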
I am trying to enable the Machine Agent on a Solaris server. Java version 11.0, Machine Agent version 20.9. I am getting the following error. I checked that the Java path is correct, and everything is according to the doc, but I am not able to start the agent. Any thoughts?

./machine-agent -d -p /opt/appdynamics/machineagent/pidfile
Using java executable at /opt/appdynamics/machineagent/jre/bin/java
#nohup: /opt/appdynamics/machineagent/jre/bin/java: Invalid argument
I am using the code below for creating a macro via the REST API from an Excel file. When I execute it, it says the macro is created without any issue, but it is not created in the Splunk application. Is this the correct process to create a macro?

url: 'https://' + host + ':' + config.port + "/servicesNS/nobody/" + app + "/properties/macros/definition?output_mode=json",
form: {
    "Name": name,
    "Definition": value
},
headers: {
    "Content-Type": "application/json",
    "Authorization": auth
}

The following is used for updating the macro, and it works fine:

{
    url: "https://" + host + ":" + config.port + "/servicesNS/nobody/" + app + "/properties/macros/" + name + "/definition?output_mode=json",
    headers: {
        "Content-Type": "application/x-www-form-urlencoded",
        Accept: "application/json",
        Authorization: auth,
    },
    form: {
        value,
    },
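For what it's worth, creating a new stanza through the properties endpoint works differently from updating one: you POST a __stanza parameter to the file-level URL to create the stanza, then set its definition with a second call. Your update call succeeds because the stanza already exists; the create call posts Name/Definition fields (with a JSON content type) that the properties endpoint does not recognize. A curl sketch (host, credentials, app, and macro name are placeholders):

```shell
# 1) create the empty stanza in macros.conf
curl -k -u admin:changeme \
  "https://localhost:8089/servicesNS/nobody/myapp/properties/macros" \
  -d __stanza="my_macro"

# 2) set the definition on the new stanza
curl -k -u admin:changeme \
  "https://localhost:8089/servicesNS/nobody/myapp/properties/macros/my_macro" \
  -d definition="index=main | head 10"
```

The same two-step shape should carry over to your HTTP client: form-encoded bodies, first to .../properties/macros with __stanza, then to .../properties/macros/<name>.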
Good day all, I would like to find the percentage of devices that have updated. The way I would like to do this is to first search index=main | stats dc(Host_Name), which gives me the total number of host names sending data to Splunk. Out of that, I would like to find what percentage have updated. I can search index=main update* | stats dc(Host_Name) and it will give the number of devices that have updated. But how do I find the percentage of devices that have updated? Normally it would be (devices updated / total devices) * 100. How do we craft this as a single search? Thanks
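One common single-search pattern is a conditional distinct count, something like index=main | stats dc(Host_Name) as total, dc(eval(if(searchmatch("update*"), Host_Name, null()))) as updated | eval pct_updated = round(100 * updated / total, 1) (searchmatch is an eval function; worth verifying the quoting on your version). The arithmetic it performs, sketched in Python on hypothetical sample data:

```python
# Hypothetical events: (host, raw_text) pairs standing in for index=main
events = [
    ("host-a", "update installed OK"),
    ("host-a", "heartbeat"),
    ("host-b", "heartbeat"),
    ("host-c", "update started"),
]

total   = len({h for h, _ in events})                       # dc(Host_Name) over all events
updated = len({h for h, raw in events if "update" in raw})  # dc over matching events only
pct_updated = round(100 * updated / total, 1)
print(pct_updated)  # → 66.7  (2 of 3 hosts updated)
```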
When loading a dashboard page in Splunk Enterprise version 7.3.3, regardless of application, the css-loader.js JavaScript file is getting interpreted as having a MIME type of text/html when it should have the application/javascript MIME type. This results in the following console error:

The resource from "https://......./js/css-loader.js" was blocked due to MIME type mismatch (X-Content-Type-Options: nosniff).

This error causes a ripple effect, resulting in an incomplete loading of the page. Please advise.
Hello, I have two fields, Vers0 and Vers1, given in hexadecimal. They encode the software version in the form Vers0.Vers1, so e.g. Vers0 = 0f and Vers1 = 10 --> Version: 15.16. Since I will be needing this again down the line, I figured I'd make a "function" that, given these two fields, outputs the resulting version.

I found the following example online:

[dec2hex(1)]
args = field_name
definition = eval $field_name$ = tostring($field_name$, "hex")
iseval = 0

Unfortunately, this is not the format I have access to; I have to use the Splunk UI to make a search macro, and I do not understand its syntax. The documentation did not help at all.

This is my desired "logic" for the search macro (see the screenshot in the original post). I then use this "function" in the following search:

base search giving me fields Vers0 and Vers1 | eval version = `eval_version(Vers0, Vers1)`

but this does not lead to success. Any insights into what I am doing wrong? I apologize for the somewhat poor description, but Splunk is really doing my head in. How can simple things be this complicated... Thanks, guys.
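In case it helps: the conf-file example maps directly onto the UI fields (the stanza name is the macro Name with its argument count, args is the Arguments field, definition is the Definition field), and hex-to-decimal is tonumber with base 16. A macros.conf sketch of a two-argument version (the macro name eval_version is just a choice; not tested against your data):

```
[eval_version(2)]
args = v0, v1
definition = tostring(tonumber($v0$, 16)) . "." . tostring(tonumber($v1$, 16))
iseval = 0
```

With iseval = 0 the macro is plain text substitution, so | eval version = `eval_version(Vers0, Vers1)` expands to the tostring/tonumber expression inline inside your eval; Vers0 = 0f and Vers1 = 10 should then yield "15.16".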
Hi Experts, Is it possible to create a feedback link or popup at the bottom right of a dashboard, so that when it is clicked a feedback form opens for the user? Has anyone worked on such a requirement? Note: the dashboard in question uses the tabs feature. Thanks
Hi, For some reason I am failing to get any network monitoring data into my environment. I can successfully retrieve perfmon, script data, REST, and HEC, but as soon as I create a stanza with [WinNetMon://bla], nothing happens. This is the stanza I am now using (I am actually looking for a different process, but this one is always present):

[WinNetMon://lsass]
disabled = 0
addressFamily = ipv4
direction = inbound;outbound
interval = 60
protocol = udp;tcp
index = uf_process
process = lsass
packetType = accept;connect;LostPacket

I even tried the minimum:

[WinNetMon://lsass]
disabled = 0
index = uf_process

The app is deployed with the Windows deployment server and lands on the client just fine. On the client, I pulled the following from splunkd.log:

09-22-2020 13:57:29.780 +0200 ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-netmon.exe"" splunk-netmon - NetmonStartDriver - StartService failure for splknetdrv! Error = -2144206839
09-22-2020 13:57:29.780 +0200 ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-netmon.exe"" splunk-netmon - NetmonAppDoMonitoring: Failed to open monitor device: 0x80320009
09-22-2020 13:57:29.780 +0200 ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-netmon.exe"" splunk-netmon - NetmonAppDoMonitoring: Error 0x80320009 occurred during execution

This shows up after restarting the UF on the client. I can't seem to find the solution to this one. I tried changing the service the UF runs under from LOCAL SYSTEM ACCOUNT to a named account with local admin rights, but it made no difference. It almost looks like the Windows client is missing something. This morning I removed the universal forwarder and installed the latest version. Still nothing.
Hello! I have a table with a column consisting of checkboxes. There is a button which, when clicked, turns the state of some checkboxes to checked. But the table is spread across multiple pages. There is another button which, when clicked, submits information (into a lookup file) for all rows where the checkbox is in the checked state. My problem is that the information only gets submitted up to the current page number. Say I click the Manual Selection button (the second button) on page 1: then the information gets written for the last four servers only, not for all the servers across all the pages. And when I go to page 2 and then hit the second button, the information gets written only up to page 2.
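A likely cause: a paginated table only renders the current page's rows in the DOM, so JavaScript that walks the table element only ever sees those rows. Reading from the search manager's results model instead of the rendered table sees every row. A sketch (splunkjs; the component id and row handling are hypothetical):

```javascript
// Sketch: read all result rows from the search manager, not the table DOM,
// so rows on non-visible pages are included when writing the lookup.
require(["splunkjs/mvc", "splunkjs/mvc/simplexml/ready!"], function (mvc) {
    var manager = mvc.Components.get("my_table_search");  // hypothetical search id
    var results = manager.data("results", { count: 0 });  // count 0 = all rows
    results.on("data", function () {
        var rows = results.data().rows;
        // filter here for rows whose checkbox state is tracked (e.g. in a token
        // or a client-side map), then submit them to the lookup in one pass
        console.log("rows across all pages:", rows.length);
    });
});
```

The checkbox state itself would also need to live somewhere page-independent (a token or a JS map keyed by row id), since checkboxes on hidden pages are destroyed when the page changes.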