
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi @pavithra, adding your information to the below inputs.conf:

[monitor://C:\Users\_svcAPACCommVault01\OneDrive - Lendlease\Desktop\csv\*.csv]
disabled = 0
sourcetype = backup
index = acn_lendlease_commvault
host = your_host

Ciao.
Giuseppe
1. The thread is relatively old, so you might not get many responses. It's usually better to post a new question (linking to the old thread for reference) than to dig up a several-years-old thread.
2. The .xsl file is meant to be applied on the Vault side - installed to ENE, not used on Splunk's side. https://docs.splunk.com/Documentation/AddOns/released/CyberArk/Setup
Are you trying to set it up in Cloud or on-prem? (the section of Answers where you posted it suggests Cloud but it's better to be sure).
Dear All,

I need your assistance in fetching Microsoft Exchange Server logs using the Splunk Universal Forwarder. I can provide the paths for the MSG Tracking, SMTP, and OWA log files. The goal is to configure the Universal Forwarder to collect these logs and forward them to a central Splunk server.

Given that the Splunk documentation indicates that the MS Exchange App is end-of-life (EOL), is it necessary to use an add-on? The documentation suggests creating GPO policies and making other changes.

However, in IBM QRadar the process is simpler: you install the WinCollect agent, specify the paths for the MSG Tracking, SMTP, and OWA logs, and the agent collects and forwards the logs to the QRadar Console. The Auto Discovery feature in QRadar then creates the log source automatically.

Is there a simpler and more straightforward method to collect these logs using the Splunk Universal Forwarder? Thank you in advance for your assistance.
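For reference, collecting these files with a Universal Forwarder usually comes down to plain file monitoring. Below is a minimal inputs.conf sketch; the paths, index, and sourcetype names are placeholders that would have to be adjusted to the actual MSG Tracking, SMTP, and OWA (IIS) log locations:

# inputs.conf on the Exchange server's Universal Forwarder (illustrative values only)
[monitor://C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking\*.log]
disabled = 0
index = exchange
sourcetype = exchange:msgtracking

[monitor://C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\ProtocolLog\SmtpReceive\*.log]
disabled = 0
index = exchange
sourcetype = exchange:smtp

# OWA activity is recorded in the IIS logs
[monitor://C:\inetpub\logs\LogFiles\*.log]
disabled = 0
index = exchange
sourcetype = exchange:owa:iis

Monitor stanzas like these are enough to get the raw events to the indexers; an add-on mainly adds field extractions and CIM mappings on top.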
A heavy forwarder (HF) on its own does not have any "HA" or "clustering" feature. It's possible to manually craft some active-passive solutions, but they require quite a lot of "external architecting" - doing a lot of stuff semi-manually at the OS level. I think there was a .conf presentation about it a few years ago, but I can't find it.
1. This is not a professional support service. People have their own lives and respond when they have some spare time.
2. For typical Apache httpd logs there are two built-in sourcetypes - access_combined and apache_error.
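For example, a minimal inputs.conf sketch using those sourcetypes (the log paths and index name are illustrative and depend on your distribution and environment):

# inputs.conf on the web server's forwarder (example paths)
[monitor:///var/log/httpd/access_log]
disabled = 0
sourcetype = access_combined
index = web

[monitor:///var/log/httpd/error_log]
disabled = 0
sourcetype = apache_error
index = web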
Dashboard Studio does not allow custom JS and has no built-in capability to do this on its own. You could try to do something like this using a SimpleXML dashboard and some custom JS scripting on your side.
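As a rough sketch, a SimpleXML dashboard can load custom JS through the script attribute on its root node; the file name below is hypothetical, and the rotation/pagination logic itself would live in that file under your app's appserver/static directory:

<dashboard script="panel_rotation.js">
  <label>KPI Overview</label>
  <row>
    <panel>
      <title>KPI 1</title>
      <single>
        <search>
          <query>| makeresults | eval value=42</query>
        </search>
      </single>
    </panel>
  </row>
</dashboard>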
Forwarders send requests to the Cluster Manager for a list of available indexers.  The requests are sent at adjustable intervals.  See https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/indexerdiscovery#How_indexer_discovery_works for details.
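On the forwarder side this is driven by the indexer discovery settings in outputs.conf. A minimal sketch follows; the group name, manager URI, and secret are placeholders, and polling_rate is one of the knobs that influences how often the forwarder asks for a fresh list:

# outputs.conf on the forwarder (illustrative values)
[indexer_discovery:cluster1]
manager_uri = https://cluster-manager.example.com:8089
pass4SymmKey = <indexer discovery secret>
polling_rate = 10

[tcpout:cluster1_group]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster1_group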
You can compose your own download link based on the repo's contents and the existing Solaris link. But the general rule for obtaining old versions is "call the support for it".
You cannot append to a gzipped file. As simple as that.
What have you tried so far and how does it fail to meet your expectations?
How and when does the UF check in with the cluster node to see what indexers are available? Thank you - I spent some time searching, but no joy.
Hi, thanks for the response! Please find the details below:
filename - BackupJobSummaryReport_2024-07-07-08-00-06 (it will be in incremental order based on date)
path - C:\Users\_svcAPACCommVault01\OneDrive - Lendlease\Desktop\csv\*.csv
column separator - ,
header row - Client,Host Name,Total Jobs,Completed,Completed with errors,Completed with warnings,Killed,Unsuccessful,Running,Delayed,No Run,No Schedule,Committed,Size of Application,Compression Rate,Data Written,(Space Saving Percentage),Start Time,End Time,Protected Objects,Failed Objects,Failed Folders
sourcetype - backup
index - acn_lendlease_commvault
I guess that might be the only way. Thanks a lot!
Thank you so much for the help, but when I tried the link, it doesn't have the Solaris version in it.
Hi Team,

I have many (10+) large CSV lookup files (~200MB) referred to in multiple places. To improve performance, I moved them into compressed files (*.csv.gz) and changed the existing references. In some cases, we need to retain the existing file data. That said,

| outputlookup *.csv.gz ```will fail to retain old data```

Hence I'm planning to do the following:

| inputlookup "old_file.csv.gz" | outputlookup "temp_file.csv" ```taking a backup```
| Run a new search | outputlookup append=t "temp_file.csv" ```append new search results to the temp file```
| inputlookup "temp_file.csv" | outputlookup "old_file.csv.gz"
| makeresults | where false() | outputlookup create_empty=true "temp_file.csv"

And this needs to be done in multiple places :(. Is there any better way to perform this without creating/clearing temp files?
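Since outputlookup append=t doesn't work against a gzipped file, one way to avoid the temp files entirely is to pull the old rows into the same search and overwrite the .gz lookup in a single pass - a sketch built only from the commands in the workflow above, with old_file.csv.gz as a placeholder name:

<your new search>
| inputlookup append=t "old_file.csv.gz" ```add the existing rows to the new results```
| outputlookup "old_file.csv.gz" ```overwrite the compressed lookup in one go```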
Hi Team,

I am using Splunk SDK version 1.12.1 (splunk-sdk@1.12.1). We are using oneshotSearch to get Splunk query data from the GET API. Please see the code snippet below for executeSearch:

module.exports.executeSearch = function (query, params, cb) {
  splunkService.oneshotSearch(query, params, function (err, results) {
    console.log("Query is : " + query);
    cb(err, results);
  });
};

Below is the code from where we are calling the above:

SplunkQuery.executeSearch(splunkSearch, splunkParams, function (err, results) {
  if (err) {
    if (err.data && err.data.messages) {
      Log.error(err.data.messages);
    }
    var error = Boom.badRequest();
    error.reformat();
    error.output.payload = Response.buildResponse(Errors.ERROR_RECORD_RETRIEVAL_FAILURE, []);
    // return reply(error);
    throw err;
  }
  var events = [];
  var rawRowIndex = results.fields.indexOf('_raw');
  if (results.rows.length == 0 && request.query.id) {
    var error = Boom.badRequest();
    error.reformat();
    error.output.payload = Response.buildResponse(Errors.ERROR_INVALID_ID_PARAM, []);
    return h.response(error);
  }
  for (var i = 0; i < results.rows.length; i++) {
    var splunkRecord = results.rows[i];
    Log.info("splunkRecord" + splunkRecord);
    if (splunkRecord && splunkRecord[rawRowIndex]) {
      var rawRecord = splunkRecord[rawRowIndex];
      events.push(Util.splunkRecordToEvent(JSON.parse(rawRecord.replace(/\nValue of UseDynamoDB = True/g, ''))));
    }
  }
  Log.info("end splunck sear");
  Log.info('Splunk search completed, events count:' + events.length);
  h.response(Response.buildResponse(0, events));
});

I can see the result or events in the console, along with the search count log ("Splunk search completed, events count:"). But I am getting a 500 error as the response through curl and Postman too. What code changes do I have to make to get the result data as the response? Please suggest. Thank you.
Can anyone help with my request, please?
Hi @gcusello, have you any idea what the reason is for (the lack of) this option and do you know if there are plans to adjust this?   
Hi Team,

We have designed a Studio dashboard like the one below with 70 KPI panels on the same page, but we want the KPI panels to be split so that only 20 panels appear per page and the remaining panels go to the next page. We would also like these pages to cycle to the next page one after the other at a specified time span that we can select - 10 seconds, 20 seconds, 30 seconds, and so on.

The above is the KPI dashboard we have designed. We request you to kindly help us with this.