
All Posts

Hi Team, I have many (10+) large CSV lookup files (~200 MB each) that are referenced in multiple places. To improve performance, I moved them into compressed files (*.csv.gz) and updated the existing references. In some cases we need to retain the existing file data, but

| outputlookup *.csv.gz   ```will fail to retain old data```

Hence I am planning to do the following:

| inputlookup "old_file.csv.gz" | outputlookup "temp_file.csv"   ```taking a backup```
(run the new search) | outputlookup append=t "temp_file.csv"   ```append new search results to the temp file```
| inputlookup "temp_file.csv" | outputlookup "old_file.csv.gz"
| makeresults | where false() | outputlookup create_empty=true "temp_file.csv"

And this needs to be done in multiple places :(. Is there a better way to perform this without creating/clearing temp files?
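One way to avoid the temp file entirely is to append the existing lookup contents to the new search results with inputlookup append=t and write everything back in a single pipeline. A minimal sketch, assuming your Splunk version supports writing .csv.gz lookups with outputlookup; the lookup name and key field are placeholders:

<your new search producing the rows to add>
| inputlookup append=t "old_file.csv.gz"   ```pull in the existing lookup rows after the new results```
| dedup your_key_field   ```optional: drop duplicate keys, keeping the new rows (which come first)```
| outputlookup "old_file.csv.gz"   ```overwrite the compressed lookup with old + new rows```

This keeps the whole operation in one search per lookup, so there is nothing to clean up afterwards.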
Hi Team, I am using Splunk SDK version 1.12.1 (splunk-sdk@1.12.1). We are using oneshotSearch to get Splunk query data from a GET API. Please see the code snippet below for executeSearch:

module.exports.executeSearch = function (query, params, cb) {
  splunkService.oneshotSearch(query, params, function (err, results) {
    console.log("Query is : " + query);
    cb(err, results);
  });
};

Below is the code from which we call the above:

SplunkQuery.executeSearch(splunkSearch, splunkParams, function (err, results) {
  if (err) {
    if (err.data && err.data.messages) {
      Log.error(err.data.messages);
    }
    var error = Boom.badRequest();
    error.reformat();
    error.output.payload = Response.buildResponse(Errors.ERROR_RECORD_RETRIEVAL_FAILURE, []);
    // return reply(error);
    throw err;
  }
  var events = [];
  var rawRowIndex = results.fields.indexOf('_raw');
  if (results.rows.length == 0 && request.query.id) {
    var error = Boom.badRequest();
    error.reformat();
    error.output.payload = Response.buildResponse(Errors.ERROR_INVALID_ID_PARAM, []);
    return h.response(error);
  }
  for (var i = 0; i < results.rows.length; i++) {
    var splunkRecord = results.rows[i];
    Log.info("splunkRecord" + splunkRecord);
    if (splunkRecord && splunkRecord[rawRowIndex]) {
      var rawRecord = splunkRecord[rawRowIndex];
      events.push(Util.splunkRecordToEvent(JSON.parse(rawRecord.replace(/\nValue of UseDynamoDB = True/g, ''))));
    }
  }
  Log.info("end splunck sear");
  Log.info('Splunk search completed, events count:' + events.length);
  h.response(Response.buildResponse(0, events));
});

I can see the results/events in the console, along with the search count log ("Splunk search completed, events count:"), but I am getting a 500 error as the response through both curl and Postman. What code changes do I have to make to get the result data back in the response? Please suggest. Thank you
Can anyone help with my request, please?
Hi @gcusello, do you have any idea why this option is lacking, and do you know whether there are plans to adjust this?
Hi Team, We have designed a Dashboard Studio dashboard (shown below) with 70 KPI panels on the same page, but we want the KPI panels to be split so that only 20 panels are shown per page and the remaining panels flow onto the following pages. We would also like the pages to rotate automatically, one after the other, at a selectable interval such as 10, 20, or 30 seconds. The screenshot above shows the KPI dashboard we have designed. We request you to kindly help us with this.
Thank you for the assist. Much appreciated. I have the sc_admin role on my user, and previously I was able to create and deploy apps using the Splunk Web UI. I think I'll raise a Splunk Support ticket to assess the issue and get a solution for it. Thanks once again.
Hi Team, We have recently started ingesting Apache access and request logs from an application, but the data parsing isn't working as expected. Could you please let me know the field names for these events so I can try to extract them manually? Alternatively, is there any format or add-on available that would enable automatic field extraction? If so, that would also be fine with me. For your information, our Splunk Search Head is hosted in the cloud and managed by Splunk Support. I have provided the log structure for both log sources below for reference. Please help to check and advise.

Request logs:

[09/Aug/2024:07:50:37 +0000] xx.yyy.zzz.aa TLSv1.2 ABCDE-FGH-IJK256-LMN-SHA123 "GET /share/page/ HTTP/1.1" xxxxx
[09/Aug/2024:07:50:37 +0000] xx.yyy.zzz.aa TLSv1.2 xxxxx-xxx-xxx256-xxx-xxx123 "GET /share/page/ HTTP/1.1" -

Access logs:

xx.yyy.zzz.aa - - [09/Aug/2024:07:57:00 +0000] "GET /share/page/ HTTP/1.1" 200 xxxxx
aaa.bbb.ccc.dd - - [09/Aug/2024:07:56:53 +0000] "GET /share/page/ HTTP/1.1" 200 -

Thank you.
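For what it's worth, the access-log sample looks like the standard Apache common/combined format, which Splunk's pretrained access_common / access_combined sourcetypes (and the Splunk Add-on for Apache Web Server) can usually extract automatically. If a custom sourcetype has to stay, search-time extractions along these lines could be a starting point; the stanza names and field names below are assumptions based on the samples, not an official add-on:

# props.conf on the search head (sketch only; rename the stanzas to your actual sourcetypes)
[your_apache_access_sourcetype]
EXTRACT-access = ^(?<clientip>\S+)\s+\S+\s+\S+\s+\[(?<request_time>[^\]]+)\]\s+"(?<method>\S+)\s+(?<uri_path>\S+)\s+(?<http_version>[^"]+)"\s+(?<status>\d+)\s+(?<bytes>\S+)

[your_apache_request_sourcetype]
EXTRACT-request = ^\[(?<request_time>[^\]]+)\]\s+(?<clientip>\S+)\s+(?<tls_protocol>\S+)\s+(?<tls_cipher>\S+)\s+"(?<method>\S+)\s+(?<uri_path>\S+)\s+(?<http_version>[^"]+)"\s+(?<bytes>\S+)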
Hi, okay, thanks, we will try that.
If you have the correct permissions to create an app, it should work once you have filled in all the required fields. If it doesn't work, you should open a support ticket with Splunk; they can check what the issue is.
Hello Splunkers!! I am executing the script below to backfill the summary index for my saved search. The script works fine for 4th and 6th August, but it is not working for 5th August. Please suggest some potential reasons why the script fails for 5th August, even though the data is available in the main index from which the saved search pushes data to the summary index.

Example of the script I executed for 5th August:

splunk cmd python fill_summary_index.py -app customer -name si_summary_search -et 1693883300 -lt 1693969700 -j 8 -owner admin -auth admin:yuuuyyyxx

I am also getting the warning below for the 4th, 5th, and 6th only.
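As a side check (a sketch, not from the original post): converting the -et/-lt epoch values to readable dates, and confirming that raw events actually exist in that window, only takes two quick searches. your_index is a placeholder for the index the saved search reads from.

| makeresults
| eval earliest_time=strftime(1693883300, "%F %T %Z"), latest_time=strftime(1693969700, "%F %T %Z")   ```show which dates the epoch values actually cover```

| tstats count where index=your_index earliest=1693883300 latest=1693969700 by _time span=1h   ```confirm events exist hour by hour in the backfill window```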
You should show or hide the panels with the depends attribute, based on the servers you have selected. Something like:

...
<fieldset ...>
  <input type="multiselect" ...
    <change>
      <condition value="Server A">
        <set token="t_US_DS"> ....
</fieldset>
...
<panel depends="$t_US_DS$">
.....
</panel>

I suppose you will get the idea from the above, and you can find the details in the visualization documentation on docs.splunk.com. As you probably have a dynamic list of nodes, you may need to do this using lookups with eval in the conditions.
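For illustration, here is a small self-contained Simple XML sketch of the token/depends pattern; the token name t_US_DS comes from the reply above, while the input values and the search are placeholders:

<form>
  <label>Server KPIs (example)</label>
  <fieldset submitButton="false">
    <input type="multiselect" token="server" searchWhenChanged="true">
      <label>Servers</label>
      <choice value="serverA">Server A</choice>
      <choice value="serverB">Server B</choice>
      <change>
        <condition value="serverA">
          <set token="t_US_DS">true</set>
        </condition>
        <condition>
          <unset token="t_US_DS"></unset>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel depends="$t_US_DS$">
      <title>Server A details</title>
      <table>
        <search>
          <query>index=_internal host=serverA | stats count by sourcetype</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>

The panel is rendered only while the t_US_DS token is set, which is what the depends attribute checks. With a multiselect and a large dynamic value list this simple value match gets awkward, which is why the reply above suggests driving the conditions with lookups and eval.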
This could be a storage performance issue, or it could be that your environment is too small for your workload. As said, first check storage performance; if that is sufficient, then look at the other resource usage in your MC (Monitoring Console).
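For a quick first look at disk service times from Splunk's own instrumentation (a sketch; it assumes platform instrumentation is enabled and writing to the _introspection index, and the data.* field names may vary by version, so run the base search first and check which fields your events actually contain):

index=_introspection sourcetype=splunk_resource_usage component=IOStats
| timechart avg(data.avg_total_ms) AS avg_io_service_ms BY host   ```rough view of I/O service time per instance```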
You could create a dedicated SHC just for the HF role, without any normal SH activity from end users. There is no need to use the regular end-user SHC; it is only the clustering functionality that is needed!
Hi, I'm also running into this issue frequently. Has anyone worked on this or found a solution? Thanks!
Hi @Nraj87 , HA is a continuing open issue for DB Connect. The easiest solution, as @isoutamo hinted, is to install DB Connect on a Search Head Cluster, so the cluster gives you the requested HA features; but not all customers want to have an input processor (like DB Connect) on front-end systems accessed by all users. I hope that Splunk will design a solution for this as soon as possible; for the moment, there's a request on ideas.splunk.com that you could vote for: https://ideas.splunk.com/ideas/EID-I-85

There are two problems to solve for HA: the checkpoint and the input enablement. A non-automatic workaround is to install DB Connect on at least two HFs and create a scheduled script that takes a KV store backup from the main HF and restores it on the secondary one; in this way you align the checkpoint between the two HFs. Obviously, in case of disaster you will have duplicated data for the period since the last alignment. You then have to manually start the secondary DB Connect when a disaster occurs and stop it when the disaster period is over. It's a workaround, not a solution, while waiting for the fix from Splunk that has been pending for many years (vote for it!).

Ciao. Giuseppe
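For reference, the scheduled alignment script described above could be built around the Splunk CLI KV store backup/restore commands available in recent Splunk Enterprise versions. The archive name below is a placeholder, the archive is written under $SPLUNK_HOME/var/lib/splunk/kvstorebackup by default, and you still need your own mechanism (scp, rsync, ...) to move it to the secondary HF:

# on the primary HF
$SPLUNK_HOME/bin/splunk backup kvstore -archiveName dbx_checkpoints

# copy the archive to the secondary HF, then on the secondary HF
$SPLUNK_HOME/bin/splunk restore kvstore -archiveName dbx_checkpoints

Check the Admin Manual for your version's constraints on restoring over existing collections before automating this.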
Hi @Easwar.C, Thank you for posting to the community. Is your jar file placed under the path <JRE_HOME>/lib/ext, as indicated in the prerequisites for Object Instance Tracking? Also, when using the JDK runtime environment, we need to set the classpath using the -classpath option for the application. After these settings, restart the JVM. Moreover, it is worth checking whether the user currently running the JVM has read access to the jar file, because this error can be triggered when permission is denied. Hope this helps, Martina
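For example, a minimal sketch of the -classpath option (the jar paths and main class are placeholders for your own application):

java -classpath /path/to/your-tracking.jar:/path/to/your-app.jar com.example.YourMainClass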
Hi Team, I'm working on setting up a dashboard that includes the following EUM Browser metrics:

- Monthly Active Users
- Bounce Rate
- Session Duration
- Daily Average Active Users

Could anyone provide guidance on how to retrieve these metrics and display them on a dashboard? Best regards, Nivedita Kumari
Hi @Alnardo , which type of disks are you using for your Search Head and your Indexers? How many IOPS do your disks provide? Remember that Splunk requires at least 800 IOPS, and the more performant your disks are, the more performant your searches will be. For more info, see https://docs.splunk.com/Documentation/Splunk/9.3.0/Capacity/Referencehardware Ciao. Giuseppe
Hi @Real_captain , the search seems correct and you should also have results for the present time; are you sure that you have data for the last day that match the conditions? Anyway, your solution with append is subject to the 50,000-result limit because it's a subsearch. About the graph: you should be able to plot one from your search; see the Visualization tab, or use a dashboard panel. Ciao. Giuseppe
Hi @JJE , I'm not interested in your logs, only in the timestamp format! Anyway, check whether the timestamp has the format I described; if it does, use the TIME_FORMAT option in props.conf. Ciao. Giuseppe
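For reference, a minimal props.conf sketch of the TIME_FORMAT option (the sourcetype name and the strptime pattern are placeholders; adjust them to the actual timestamp at the start of your events):

# props.conf on the parsing tier (indexers or heavy forwarders)
[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30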