All Posts


Hi @tshah5, I noticed you have ServiceClass=com.mongodb.jdbc.MongoDriveri. Is that a typo, with a stray "i" on the end? Let us know if you still experience the issue after updating that. Please let me know how you get on, and consider adding karma to this or any other answer if it has helped. Regards, Will
Hi @uagraw01

Some interesting info at https://docs.splunk.com/Documentation/Splunk/latest/data/Specifyinputpathswithwildcards if you haven't already seen it.

On Windows, if you specify the [monitor://C:\Windows\foo\bar*.log] stanza in the inputs.conf file, Splunk Enterprise translates the path into this:

[monitor://C:\Windows\foo\]
whitelist = bar[^\\]*\.log$

In Windows, allow list and deny list rules don't support regular expressions that include backslashes. Use two backslashes (\\) to escape wildcards.

This means

[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\*.csv]

becomes

[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\]
whitelist = [^\\]*\.csv$

I'm wondering if this whitelist is being overwritten somehow - have you specified any whitelist? It might be worth trying the following input to see if this works, basically explicitly setting the whitelist to what it's expecting:

[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\]
whitelist = [^\\]*\.csv$

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
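If it helps to confirm what the forwarder has actually merged for that monitor stanza, btool can dump the effective configuration and show which file each setting comes from. A rough example below, assuming a default Universal Forwarder install path on Windows (adjust the path to your environment):

cd "C:\Program Files\SplunkUniversalForwarder\bin"
REM Dump the merged inputs.conf with per-setting file origins, keep only relevant lines
splunk btool inputs list --debug | findstr /i "SZC whitelist"

If the whitelist printed for the SZC stanza differs from [^\\]*\.csv$, something else is overriding it.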
Very helpful - it is set to stream XML, so I guess that is the issue, and I need to either find a way to deal with it or modify the setting, which as you mentioned looks easier said than done.
A year after - well, better late than never. Unfortunately, I didn't find a solution to this case, so I just moved on.
This is likely due to permissions. The sc_admin role has the permissions. It can probably be handed out more granularly too.
I have installed the Splunk DBX forwarder in one of my VMs. Now, when I am trying to create a connection with MongoDB, I am getting this error (our MongoDB uses certs and a key for authentication, not a username and password):

No suitable driver found for jdbc:mongo://<host>:<port>/?authMechanism=MONGODB-X509&authSource=$external&tls=true&tlsCertificateKeyFile=<path to cert key pair>&tlsCAFile=<path to ca cert>
Diagnosis: No compatible drivers were found in the 'drivers' directory.
Possible resolution: Copy the appropriate JDBC driver for the database you are connecting to in the 'drivers' directory.

Splunk DBX Add-on for MongoDB: 1.2.0

List of Mongo drivers tried:
mongodb-driver-core-4.10.2.jar
mongojdbc4.8.3.jar
splunk-mongodb-jdbc-1.2.0.jar
mongodb-driver-sync-4.10.2.jar
ojdbc8.jar
UnityJDBC_Trial_Install.jar
mongodb-jdbc-2.2.2-all.jar
mongo-java-driver-3.12.14.jar
mongodb-driver-core-5.2.1.jar
mongodb-driver-sync-5.2.1.jar

But I get the same error each time.

Splunk DBX forwarder version: Splunk 6.4.0
MongoDB version: 7.0.14

This is the db_connection_types.conf:

[mongo]
displayName = MongoDB
jdbcDriverClass = com.mongodb.jdbc.MongoDriver
ServiceClass = com.mongodb.jdbc.MongoDriveri
jdbcUrlFormat = jdbc:mongo://<host:port>,<host:port>,<host:port>/?authMechanism=MONGODB-X509&authSource=$external&tls=true&tlsCAFile=<path to ca file>&tlsCertificateKeyFile=<path to cert and key file>
useConnectionPool = false
port = 10924
ssl = true
sslMode = requireSSL
sslCertificatePath = <path to file>
sslCertificateKeyPath = <path to file>
sslAllowInvalidHostnames = false
authSource = $external
tlsCipherSuite = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
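If the trailing "i" flagged in the reply above is indeed a typo, the driver-registration keys would presumably look like this instead (only the affected lines shown; the key casing and whether ServiceClass should really point at the JDBC driver class are worth double-checking against the db_connection_types.conf examples shipped with DB Connect):

[mongo]
displayName = MongoDB
jdbcDriverClass = com.mongodb.jdbc.MongoDriver
ServiceClass = com.mongodb.jdbc.MongoDriver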
Basically, we are taking the same credentials in all 5 data inputs, so I want to combine them and segregate the performance and inventory data using 2 different polling intervals. Yes, the existing 5 inputs are Python-based modinputs. This is in our custom app.
Yes, that's the expected behavior. Instead, after entering the cert and key info, I'm redirected to a 404 error page (where it's supposed to display the input page). Thanks for the response.
Hi @luizlimapg, thank you for the response. Upon launching the app for the first time, I was prompted to enter the cert and private key, which I did. After this process, it is supposed to take me to an input page so I can fill in the rest of the information generated on the CyberArk side. However, the input page is showing a 404 error instead. I have removed and reinstalled this app a few times with no success. The server I'm having this issue on is running Splunk Enterprise version 9.3.2. I installed this app on an older version of Splunk Enterprise, version 9.2.3, and got the expected inputs screen, so I'm wondering if it's a versioning issue. I don't want to downgrade Splunk Enterprise to test this; I plan to upgrade the problematic server to 9.4.1 later anyway (for other reasons too). Any more thoughts on this? Thanks again.
Hi @stei-f. It's very odd that this would only affect the SHs, especially as any outbound connection from the Monitoring Console shouldn't be impacted by the change to the MC server name. From the Monitoring Console, if you go to Settings -> General Setup, what does that screen look like? Do you see the remote SHs in there?
values() sorts (and dedups) - use the list() function (which neither sorts nor dedups)

| makeresults
| eval token_id="c75136c4-bdbc-439b" | eval doc_no="GSSAGGOS_QA-2931" | eval key=2931 | eval keyword="DK-BAL-AP-00613"
| append [| makeresults | eval token_id="c75136c4-bdbc-439b" | eval doc_no="GSSAGGOS_QA-2932" | eval key=2932 | eval keyword="DK-Z13-SW-00002"]
| append [| makeresults | eval token_id="c75136c4-bdbc-439b" | eval doc_no="GSSAGGOS_QA-2933" | eval key=2933 | eval keyword="DK-BAL-AP-00847"]
| stats list(key) as key list(keyword) as keyword list(doc_no) as doc_no by token_id
| eval row=mvrange(0,mvcount(doc_no))
| mvexpand row
| foreach doc_no keyword key [| eval <<FIELD>>=mvindex(<<FIELD>>,row)]
| fields - row
Hi @KJ10  Can I ask, why are you looking to consolidate the inputs? I presume the existing 5 inputs are Python based modinputs? Is this in a custom app or something from Splunkbase? Let me know and I will see if I can work out how best to help. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hi Team, how can I combine multiple data inputs into one? Basically, I have 5 different data inputs where I am taking the same data from the user. How can I combine them all into one data input? I want one data input that will internally handle 2 different data types with different polling intervals. Is this possible with the Python SDK, and how?

Different polling intervals for "performance" and "inventory" data
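One way this is commonly done with the splunklib Python SDK (a minimal sketch, assuming splunklib is bundled in the app; collect_data() is a hypothetical helper standing in for your existing five collectors) is a single modular input scheme with a data_type argument. The admin then creates two stanzas of the same input, one for performance data on a short interval and one for inventory data on a longer one, so the shared credential only has to be configured once:

import sys
from splunklib.modularinput import Script, Scheme, Argument, Event


class CombinedInput(Script):
    def get_scheme(self):
        scheme = Scheme("combined_input")
        scheme.description = "Collects performance or inventory data from one source"
        scheme.use_single_instance = False  # each stanza runs with its own interval

        data_type = Argument("data_type")
        data_type.data_type = Argument.data_type_string
        data_type.description = "Either 'performance' or 'inventory'"
        data_type.required_on_create = True
        scheme.add_argument(data_type)
        return scheme

    def stream_events(self, inputs, ew):
        for stanza_name, stanza in inputs.inputs.items():
            data_type = stanza.get("data_type", "performance")
            # collect_data() is hypothetical; replace with your real collection logic
            for record in collect_data(data_type):
                ew.write_event(Event(data=record, stanza=stanza_name,
                                     sourcetype="vendor:%s" % data_type))


if __name__ == "__main__":
    sys.exit(CombinedInput().run(sys.argv))

Usage would then be two inputs.conf stanzas of the same input type, each with its own polling interval:

[combined_input://performance]
data_type = performance
interval = 60

[combined_input://inventory]
data_type = inventory
interval = 3600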
The split function is producing an error.
I have a Splunk clustered environment where the License Manager has a non-existent (cannot be resolved via name lookup) serverName configured (etc/system/local/server.conf -> serverName). It has been running like this for some time, but it is introducing issues with license monitoring in the Monitoring Console.

To eliminate this issue and make this Splunk instance consistent with the other existing instances, I tried to simply change the serverName in server.conf to the hostname and restart the Splunk service. The Splunk service starts without complaint, but the Monitoring Console then reports that suddenly all the Search Heads are unreachable. Querying the Search Heads for shcluster-status results in errors. Reverting back to the old name and restarting fixes the Search Head unreachable issue and status.

This License Manager server has the following roles:
* License Manager
* (Monitoring Console)
* Manager Node

I do not see any connection as to why this change is affecting Search Heads. Indexers are fine. The Deployer is a different server. I found documented issues (for this kind of change) for Indexers and the Monitoring Console itself, or that it can have side effects for the Deployment Server, but no real hit on Search Heads/SHC.

As I do not have permanent access to this instance, I have to prepare a kind of remediation plan, or at least an analysis. I'm searching for hints on where to start my investigation. Maybe someone has successfully changed a License Master name. Hoping that I'm missing something obvious. Thanks
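For reference, the rename itself is a small, reversible change; a rough sketch below, using a hypothetical hostname lm01.example.com in place of the real one:

# etc/system/local/server.conf on the License Manager
[general]
serverName = lm01.example.com

Or equivalently via the CLI, which edits the same setting:

$SPLUNK_HOME/bin/splunk show servername
$SPLUNK_HOME/bin/splunk set servername lm01.example.com
$SPLUNK_HOME/bin/splunk restart

One thing worth re-checking after the rename, as the reply above suggests, is the Monitoring Console's Settings -> General Setup page, since the MC keeps its own list of configured search peers and distributed search groups that may still reference the old name.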
Hi @elcuchi,

Ok, I use basic auth instead of OAuth, so it's a different scenario; OAuth was not available on our first tested TA versions and we never moved away from basic auth (which I should prioritize now). Did you test basic auth, or is that not an option?

The thing is, for basic auth: whenever you configure the ServiceNow account in the TA, you have to pass that account as a parameter to the ServiceNow action commands OR reference it in the alert action (it is the first field it asks you to fill in). That is the account the TA will use to open the REST connection with ServiceNow and push the data there (either event or incident). AFAIK, there is no configuration in the TA that uses the actual logged-in Splunk user in the authentication context to ServiceNow to trigger those actions. Behind the scenes, every communication is done via the account configured in the TA; at least this is how it has worked for me while using this TA for the past 4-5 years.

So, question: how are you testing this? (Based on your "when we test the creation of an incident from splunk interface" statement.)

For OAuth it may be different, but according to the documentation I don't think it actually is. The documentation says that OAuth requires UI access to the ServiceNow instance, which you mentioned you don't have: "OAuth Authentication configuration requires UI access to your ServiceNow Instance. User roles that do not have UI access will not be able to configure their ServiceNow account to use OAuth."

If this is using the logged-in person's access to ServiceNow instead of whatever OAuth config, it makes no sense for the TA to ask for clientID and clientSecret, as the main purpose of those is to authenticate.
The output of the values function is always in lexicographical order. That destroys any relationship that might exist between/among fields. The solution is to combine related fields into a single field before stats and then break them apart again afterwards.

| eval tuple = mvzip(keyword, doc_no)
| stats values(tuple) as tuple by token_id
| eval pairs = split(tuple, ",")
| eval keyword = mvindex(pairs,0), doc_no = mvindex(pairs, 1)
| fields - tuple, pairs
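Since the expected output also keeps the key column aligned, one way to extend the same approach (a sketch, not tested against your data) is to zip all three fields into the tuple and expand before splitting:

| eval tuple = mvzip(mvzip(tostring(key), keyword), doc_no)
| stats values(tuple) as tuple by token_id
| mvexpand tuple
| eval parts = split(tuple, ",")
| eval key = mvindex(parts,0), keyword = mvindex(parts,1), doc_no = mvindex(parts,2)
| fields - tuple, parts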
The field values of the 2nd and 3rd events are getting interchanged. Please suggest how to maintain the order in the Splunk stats command. I can't use any fields other than token_id in the stats by clause.

Sample event:

|makeresults
|eval token_id="c75136c4-bdbc-439b" |eval doc_no="GSSAGGOS_QA-2931" |eval key=2931 |eval keyword="DK-BAL-AP-00613"
|append [| makeresults |eval token_id="c75136c4-bdbc-439b" |eval doc_no="GSSAGGOS_QA-2932" |eval key=2932 |eval keyword="DK-Z13-SW-00002"]
|append [| makeresults |eval token_id="c75136c4-bdbc-439b" |eval doc_no="GSSAGGOS_QA-2933" |eval key=2933 |eval keyword="DK-BAL-AP-00847"]
| stats values(key) as key values(keyword) as keyword values(doc_no) as doc_no by token_id
| eval row=mvrange(0,mvcount(doc_no))
| mvexpand row
| foreach doc_no keyword key [| eval <<FIELD>>=mvindex(<<FIELD>>,row)]
| fields - row

Search result output:

token_id               key   keyword          doc_no
c75136c4-bdbc-439b     2931  DK-BAL-AP-00613  GSSAGGOS_QA-2931
c75136c4-bdbc-439b     2932  DK-BAL-AP-00847  GSSAGGOS_QA-2932
c75136c4-bdbc-439b     2933  DK-Z13-SW-00002  GSSAGGOS_QA-2933

Expected output:

token_id               key   keyword          doc_no
c75136c4-bdbc-439b     2931  DK-BAL-AP-00613  GSSAGGOS_QA-2931
c75136c4-bdbc-439b     2932  DK-Z13-SW-00002  GSSAGGOS_QA-2932
c75136c4-bdbc-439b     2933  DK-BAL-AP-00847  GSSAGGOS_QA-2933
OK. You should have entries higher up regarding your wildcarded entries; they will be shown under Monitored directories. And inputstatus should show you the files with their status (where the input is, or why files are not being ingested). On Linux you might just do | grep -C 10 BaptoEvents to limit the output dump to only the relevant entries, but since you're on Windows, you have to use your PS-fu or cmd-fu.
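For the Windows side, a rough PowerShell equivalent of that grep (assuming a default Splunk install path; adjust to yours) could be:

# Dump input status and show 10 lines of context around BaptoEvents
& "C:\Program Files\Splunk\bin\splunk.exe" list inputstatus |
    Select-String -Pattern "BaptoEvents" -Context 10,10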
@nieminej  I'm uncertain about this, please open a Splunk support ticket to investigate the issue further.