I'm a beginner, can you be more specific? I'm having the same problem. I'm looking forward to your reply.
Found a solution by splitting out

filelog/mule-logs-volume:
  include:
    - /splunk-otel/*/app*.log
    - /splunk-otel/*/*/app*.log

into two separate filelog entries, as such:

filelog/mule-logs-volume1:
  include:
    - /daas-splunk-otel/*/*/dla*.log
  start_at: beginning
filelog/mule-logs-volume2:
  include:
    - /daas-splunk-otel/*/dla*.log
  start_at: beginning

and removing all the router stuff.
Hi deepakc, thanks for the quick reply. The thing is, I have only started to build the app but never finished it. So now it shows up as a 'husk' of an app, so to speak, and has no data collection finished yet. However, you were right that the error I've seen has something to do with the validation process, and I'm now trying to make heads or tails of the _internal logs as suggested by Splunk (which read, for example, that the props.conf file of the new app is missing - which indeed it is, because I haven't finished setting it up yet). I will update on potential findings once I've combed through the logs and tried to remedy the missing files.
1) I didn't find any errors in splunkd.log on the UF. How would I "check status of inputs on the UF"?
2) I found the differences in various logs, but I will check the internal logs - didn't do that yet.
3) Discrepancy: see other replies to Giuseppe.
4) Time parsing: I have added some samples below - the time formats are consistent across the other events.
5) So far there are no rules on the HFs.
Question in the title. Thanks in advance!
Well... there are several things to consider here.
1. Are all files being read properly (check the status of inputs on the UF, check for errors, verify that you're not hitting limits on open files, and so on)?
2. Are other files from the same UF (the typical candidates for cross-checking would be the UF's own logs) getting ingested properly?
3. How did you verify the discrepancy between those numbers?
4. Are your time parsing rules properly set up? That can heavily influence _where_ (or rather "when") the events are indexed. So you might be getting the events ingested "properly" but you might just not be seeing them while searching.
5. Do you have any rules (props/transforms, ingest actions) on your HFs/indexers that filter the events (or move them to other indexes)?
There are many things that could affect your ingestion process.
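For point 1, a couple of sketch searches against the UF's own internal logs (forwarded to the indexers by default) can help; `<your_uf>` is a placeholder for the forwarder's host name:

```
index=_internal host=<your_uf> source=*splunkd.log* (log_level=ERROR OR log_level=WARN)

index=_internal host=<your_uf> source=*splunkd.log* component=TailingProcessor
```

The first surfaces read errors and limit warnings; the second shows what the file monitor is actually doing with your monitored files.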
I don't think this is a cert issue. If you use the AOB, it tries to validate your app against the online certification service - basically being given a stamp of approval - and needs the below:
"Enter the login settings for your Splunk.com account. This information is required for the app precertification process"
You normally get this via your sales process. For the proxy part, it could be incorrect credentials; I don't think it's a cert issue, but I could be wrong. There is a section in the AOB docs for where self-signed certs should go, but I think this is a red herring: https://docs.splunk.com/Documentation/AddonBuilder/4.1.4/UserGuide/ConfigureDataCollection
If the app is installed on the SH, it will be replicated to the indexer UNLESS it is excluded from the bundle. To exclude files from the bundle, add entries to the [replicationDenyList] stanza in distsearch.conf and restart the SH.

[replicationDenyList]
MSbin = E:\Splunk\etc\apps\TA-microsoft-graph-security-add-on-for-splunk\bin\*
Hi @michaelteck,
if you give the monitor command a path, Splunk reads all the files in this path. You can exclude files older than a given age (e.g. 1 day), adding a parameter to the input stanza:

ignoreOlderThan = 1d

Ciao.
Giuseppe
A bit late to the party 
But seriously - getting the size of the index is one thing, but before that we need to define what we mean by that. An index can be measured by many different parameters:
1. Cumulative size of all indexed events (that's what license usage counts as well)
2. Size of raw event files (compressed or not)
3. Cumulative size of everything related to just events (raw data, metadata)
4. Cumulative size of data regarding events as well as summaries created for the given index
5. Any of the above, but expressed not in terms of file sizes but in terms of usage of the underlying storage (as in block-aligned or similar)
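As a rough sketch for measures 1 and 2 above, using standard searches (`main` is a placeholder index name, and the license_usage search needs to run where the license master's logs are indexed):

```
| dbinspect index=main
| stats sum(sizeOnDiskMB) AS size_on_disk_mb

index=_internal source=*license_usage.log type=Usage idx=main
| stats sum(b) AS licensed_bytes
```

The first sums the bucket sizes on disk; the second sums the pre-compression bytes that the license master counted against your license.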
Hello everyone, 
I turn to you because I have a little problem. I have an MFT server that generates logs in a directory. In this directory, the log files are stored in subdirectories named after the day, and the log files have names like 1000005847456.log. For example, today's logs (23 April 2024) are stored in the 2024-04-23/ directory. For now, I have this inputs.conf file:

[monitor:///data/logs/.../100000*.log]
disabled = false
sourcetype = log4j
host = PC
followTail = 0
index = test_wild

When I launch the Universal Forwarder, it starts listing all files in /data/logs/.../, and it also starts to send the data from the log directories as of 4 days ago. I am not looking to retrieve the old log data, only today's. I don't understand this behavior of the Universal Forwarder. Could someone help me?
Hi @Egyas, I just ran into the same issue trying to upgrade a Splunk UF 9.1.2 -> 9.2.1 installed on a server with a Splunk Enterprise instance (just upgraded to 9.2.1). Did you find any workaround/solution other than removing one of them? Thanks in advance!
So you want to use a checkbox and not a multiselect. Both are different in a Splunk context. Here is the updated one. You can leave the checkbox and just filter in the text box, or you can select the checkbox and the filter.

<form version="1.1" theme="light">
  <label>CheckBox_Text</label>
  <fieldset submitButton="false">
    <input type="checkbox" token="exclude" searchWhenChanged="true" id="checkbox">
      <label>Select to exclude</label>
      <fieldForLabel>Project</fieldForLabel>
      <fieldForValue>Project</fieldForValue>
      <search>
        <query>|makeresults count=5|streamstats count |eval Project="Project".count|eval Record="Some records "|eval Record=if(count%2==0,Record,Record."Error")</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> ,</delimiter>
      <prefix>AND NOT Project IN (</prefix>
      <suffix>)</suffix>
      <default>""</default>
    </input>
    <input type="text" token="text_filter" searchWhenChanged="true">
      <label>Text to Filter</label>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>|makeresults count=5|streamstats count |eval Project="Project".count|eval Record="Some records "|eval Record=if(count%2==0,Record,Record."Error") |where NOT like (Record,"%$text_filter$%") $exclude$</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
There are no packet errors on the UF:

ip -s link show ens192
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:50:56:bb:07:59 brd ff:ff:ff:ff:ff:ff
    RX: bytes    packets  errors  dropped  overrun  mcast
    13730859826  7324421  0       0        0        358
    TX: bytes    packets  errors  dropped  carrier  collsns
    1976804117   6163908  0       0        0        0
Hello, thanks for your help.
Until now we were using a single deployment of Splunk (indexer, search head and data inputs) on the same box. Now we have just started to split the roles by deploying a new search head.
By "the search is not working" I meant that the service is up and running and we can log on to it, but the searches are not running. We got this message:

Unable to distribute to peer named [indexer_splunk_instancename] at uri https://[indexer_ip]:8089 because replication was unsuccessful. ReplicationStatus: Failed - Failure info: failed_because_BUNDLE_DATA_TRANSMIT_FAILURE. Verify connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available.

On the indexer, in splunkd.log, we got these messages:

File length is greater than 260, File creation may fail.

After reading the doc, I saw the app is supported on the indexers but is not required. If we move this application to a heavy forwarder, will it no longer be included in the replication bundle between SH and indexer?
Some sample logs:

MD Core Data

[INFO ] 2024.04.23 01:02:36.169: (common.update) Metadescriptor received, file_name='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/metadescriptor', downloadlink='https://xxx.domain.tld:9000/console/core/metadescriptor?version=5.8.0&deployment=MSCW6YaXaCaj1y1gv23U4JxzRHFhNUZLENEX&key=2041ed80a6043bf436fc7be518df4a13&serial=1' [msgid: 622]
[INFO ] 2024.04.23 01:02:36.371: (common.update) Package descriptor received, file_name='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz/packagedescriptor.yml', url='https://xxx.domain.tld:9000/console/core/package/bitdefender_1_windows/bitdefender_1_windows-database-1713819646-1713819720.yml' [msgid: 618]
[INFO ] 2024.04.23 01:02:36.442: (common.update) Checksum and digital signature validation of package descriptor is ok, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz', descriptor='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz/packagedescriptor.yml' [msgid: 2320]
[INFO ] 2024.04.23 01:02:36.454: (common.update) Package descriptor received, file_name='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/eset_1_windows_WNDq4Y/packagedescriptor.yml', url='https://xxx.domain.tld:9000/console/core/package/eset_1_windows/eset_1_windows-database-1713821049-1713821161.yml' [msgid: 618]
[INFO ] 2024.04.23 01:02:36.459: (common.update) Checksum and digital signature validation of package descriptor is ok, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/eset_1_windows_WNDq4Y', descriptor='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/eset_1_windows_WNDq4Y/packagedescriptor.yml' [msgid: 2320]
[INFO ] 2024.04.23 01:02:51.383: (common.update) Package successfully downloaded, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz' [msgid: 671]
[INFO ] 2024.04.23 01:02:51.383: (common.update) Checksum validation of package content is ok, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz', descriptor='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz/packagedescriptor.yml', packageName='bitdefender_1_windows-database-1713819646.zip', type='database', filesChecked='951' [msgid: 2321]
[INFO ] 2024.04.23 01:02:57.775: (common.update) Package successfully downloaded, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/eset_1_windows_WNDq4Y' [msgid: 671]
[INFO ] 2024.04.23 01:02:57.775: (common.update) Checksum validation of package content is ok, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/eset_1_windows_WNDq4Y', descriptor='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/eset_1_windows_WNDq4Y/packagedescriptor.yml', packageName='eset_1_windows-database-1713821049.zip', type='database', filesChecked='36' [msgid: 2321]
[INFO ] 2024.04.23 01:03:00.597: (engines) Default parallel count set for engine, engineId='eset_1_windows', parallelcount='20' [msgid: 4602]
[INFO ] 2024.04.23 01:03:00.718: (engines) Accepting local socket, engine_id='eset_1_windows', socket='\\.\pipe\C:/Windows/Temp/ometascan/9e14Ds_13680', socketDescriptor='5924' [msgid: 4547]
[INFO ] 2024.04.23 01:03:01.415: (engines) Default parallel count set for engine, engineId='bitdefender_1_windows', parallelcount='20' [msgid: 4602]
[INFO ] 2024.04.23 01:03:01.512: (engines) Accepting local socket, engine_id='bitdefender_1_windows', socket='\\.\pipe\C:/Windows/Temp/ometascan/yC8oEL_14344', socketDescriptor='7852' [msgid: 4547]
[INFO ] 2024.04.23 01:03:04.056: (engines) Try to swap engineprocess log, engine_id='eset_1_windows' [msgid: 5594]
[INFO ] 2024.04.23 01:03:10.902: (common.update) Successfully verified product [msgid: 4696]
[INFO ] 2024.04.23 01:05:03.731: (engines) Try to swap engineprocess log, engine_id='bitdefender_1_windows' [msgid: 5594]

Syslog Data

Apr 23 01:02:36 10.178.102.75 MSCW[2456] Metadescriptor received, file_name='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/metadescriptor', downloadlink='https://xxx.domain.tld:9000/console/core/metadescriptor?version=5.8.0&deployment=MSCW6YaXaCaj1y1gv23U4JxzRHFhNUZLENEX&key=2041ed80a6043bf436fc7be518df4a13&serial=1' [msgid: 622]
Apr 23 01:02:36 10.178.102.75 MSCW[2456] Package descriptor received, file_name='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz/packagedescriptor.yml', url='https://xxx.domain.tld:9000/console/core/package/bitdefender_1_windows/bitdefender_1_windows-database-1713819646-1713819720.yml' [msgid: 618]
Apr 23 01:02:36 10.178.102.75 MSCW[2456] Checksum and digital signature validation of package descriptor is ok, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz', descriptor='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz/packagedescriptor.yml' [msgid: 2320]
Apr 23 01:02:36 10.178.102.75 MSCW[2456] Package descriptor received, file_name='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/eset_1_windows_WNDq4Y/packagedescriptor.yml', url='https://xxx.domain.tld:9000/console/core/package/eset_1_windows/eset_1_windows-database-1713821049-1713821161.yml' [msgid: 618]
Apr 23 01:02:36 10.178.102.75 MSCW[2456] Checksum and digital signature validation of package descriptor is ok, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/eset_1_windows_WNDq4Y', descriptor='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/eset_1_windows_WNDq4Y/packagedescriptor.yml' [msgid: 2320]
Apr 23 01:02:51 10.178.102.75 MSCW[2456] Package successfully downloaded, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz' [msgid: 671]
Apr 23 01:02:51 10.178.102.75 MSCW[2456] Checksum validation of package content is ok, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz', descriptor='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz/packagedescriptor.yml', packageName='bitdefender_1_windows-database-1713819646.zip', type='database', filesChecked='951' [msgid: 2321]
Apr 23 01:02:57 10.178.102.75 MSCW[2456] Package successfully downloaded, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/eset_1_windows_WNDq4Y' [msgid: 671]
Apr 23 01:02:57 10.178.102.75 MSCW[2456] Checksum validation of package content is ok, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/eset_1_windows_WNDq4Y', descriptor='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/eset_1_windows_WNDq4Y/packagedescriptor.yml', packageName='eset_1_windows-database-1713821049.zip', type='database', filesChecked='36' [msgid: 2321]
Apr 23 01:03:00 10.178.102.75 MSCW[2456] Default parallel count set for engine, engineId='eset_1_windows', parallelcount='20' [msgid: 4602]
Apr 23 01:03:00 10.178.102.75 MSCW[2456] Accepting local socket, engine_id='eset_1_windows', socket='\\.\pipe\C:/Windows/Temp/ometascan/9e14Ds_13680', socketDescriptor='5924' [msgid: 4547]
Apr 23 01:03:01 10.178.102.75 MSCW[2456] Default parallel count set for engine, engineId='bitdefender_1_windows', parallelcount='20' [msgid: 4602]
Apr 23 01:03:01 10.178.102.75 MSCW[2456] Accepting local socket, engine_id='bitdefender_1_windows', socket='\\.\pipe\C:/Windows/Temp/ometascan/yC8oEL_14344', socketDescriptor='7852' [msgid: 4547]
Apr 23 01:03:04 10.178.102.75 MSCW[2456] Try to swap engineprocess log, engine_id='eset_1_windows' [msgid: 5594]
Apr 23 01:03:10 10.178.102.75 MSCW[2456] Successfully verified product [msgid: 4696]
Apr 23 01:05:03 10.178.102.75 MSCW[2456] Try to swap engineprocess log, engine_id='bitdefender_1_windows' [msgid: 5594]

Splunk

Apr 23 01:02:36 10.178.102.75 MSCW[2456] Metadescriptor received, file_name='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/metadescriptor', downloadlink='https://xxx.domain.tld:9000/console/core/metadescriptor?version=5.8.0&deployment=MSCW6YaXaCaj1y1gv23U4JxzRHFhNUZLENEX&key=2041ed80a6043bf436fc7be518df4a13&serial=1' [msgid: 622]
Apr 23 01:02:36 10.178.102.75 MSCW[2456] Package descriptor received, file_name='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz/packagedescriptor.yml', url='https://xxx.domain.tld:9000/console/core/package/bitdefender_1_windows/bitdefender_1_windows-database-1713819646-1713819720.yml' [msgid: 618]
Apr 23 01:02:36 10.178.102.75 MSCW[2456] Checksum and digital signature validation of package descriptor is ok, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz', descriptor='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz/packagedescriptor.yml' [msgid: 2320]
Apr 23 01:02:36 10.178.102.75 MSCW[2456] Package descriptor received, file_name='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/eset_1_windows_WNDq4Y/packagedescriptor.yml', url='https://xxx.domain.tld:9000/console/core/package/eset_1_windows/eset_1_windows-database-1713821049-1713821161.yml' [msgid: 618]
Apr 23 01:02:36 10.178.102.75 MSCW[2456] Checksum and digital signature validation of package descriptor is ok, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/eset_1_windows_WNDq4Y', descriptor='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/eset_1_windows_WNDq4Y/packagedescriptor.yml' [msgid: 2320]
Apr 23 01:02:51 10.178.102.75 MSCW[2456] Package successfully downloaded, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz' [msgid: 671]
Apr 23 01:02:51 10.178.102.75 MSCW[2456] Checksum validation of package content is ok, packageDir='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz', descriptor='C:/Program Files/OPSWAT/MetaDefender Data/data/updates/db/bitdefender_1_windows_GiYkXz/packagedescriptor.yml', packageName='bitdefender_1_windows-database-1713819646.zip', type='database', filesChecked='951' [msgid: 2321]
Apr 23 01:05:03 10.178.102.75 MSCW[2456] Try to swap engineprocess log, engine_id='bitdefender_1_windows' [msgid: 5594]

There is no duplicate data in the logs. It happens not only in the first 12 days of the month, as you can see in the example logs above...
Just because events have multiple indexes and source types does not mean you can't use stats to correlate events in the events pipeline. In addition to @richgalloway's request, please also share some sample representative anonymised events showing how you would like these events to be correlated.
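In case it helps, the usual pattern looks something like this - the index, sourcetype, and field names here are placeholders:

```
(index=idx_a sourcetype=st_a) OR (index=idx_b sourcetype=st_b)
| stats values(field_a) AS field_a values(field_b) AS field_b count BY common_id
| where count > 1
```

Events from both sourcetypes are grouped on a shared id field, so fields from either side land on the same result row; the final `where` keeps only ids seen on both sides (or more than once).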
Your first error is deploying Splunk on Windows.   See https://community.splunk.com/t5/Getting-Data-In/What-are-the-pain-points-with-deploying-your-Splunk-architecture/m-p/650011 Please elaborate on "the search head is not working".  What about it is not working?  An error on an indexer does not necessarily mean there's a problem with the SH. One workaround is to rename the TA so it resides in a directory with a shorter name (by at least 8 characters).  Of course, you will have to maintain that forever.
First you need to identify what the format is supposed to be - JSON/syslog/CEF etc. Then you need to install the TA you have on a HF and on the cloud SH. Then you need to ensure the TA on the HF has been configured with the correct options; you will most likely need to ensure the Guardicore system is configured for the format you require, or the default options - speak to the Guardicore admin. From Splunkbase there appears not to be any detailed documentation, but it does state the TA uses the REST API and processes events received from the Syslog exporter. So it sounds like the TA app will have the config options to pull data.
Hey everyone, 
I currently have a use case for which I set up a Splunk Enterprise environment in an Ubuntu VM (VMware) and want to build an app with the Add-on Builder, which uses a Python script as the input method to make an API call to get my data into Splunk. That's the goal, at least. The VM communicates with the Internet just fine (even if via proxy) and my Python script gets the data from the API endpoint. However, if I try to enter the proxy credentials from my VM into the configuration of the Add-on Builder, I get the following error: "There was a problem connecting to the App Certification service. The service might not be available at this time, or you might need to verify your proxy settings and try again." Now, assuming that I did not mess up the proxy credentials, my next best bet would be that I need to give my Splunk environment a certificate to adequately communicate with the proxy. So we finally reach my question: where would I need to place such a certificate file in the directory structure, so that the Splunk add-on app can find it?