All Posts


Yes, there is, but it's not considered a Best Practice. Define a TCP input to read the logs from the selected port.  See https://docs.splunk.com/Documentation/Splunk/latest/Data/Monitornetworkports
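A minimal inputs.conf sketch of such a TCP input (the port, index, and sourcetype here are placeholders, not recommendations):

```
# inputs.conf -- listen for raw TCP data on port 9514 (hypothetical values)
[tcp://9514]
index = network
sourcetype = device_logs
connection_host = ip
```

For UDP syslog the stanza name would be [udp://514] instead; everything else is analogous.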
hi @gitingua  You could maybe try the solution mentioned in the comments here: https://community.splunk.com/t5/Dashboards-Visualizations/How-to-change-the-width-of-panels-in-xml/m-p/289203 
I am working on migrating from CentOS 7 to Ubuntu 22. Single search head, indexer cluster (3 indexers), and a deployment server used only to manage clients (not Splunk servers). For the SH and DS, is it just a straightforward matter of installing the same Splunk version on the new Ubuntu server, copying the config over, checking permissions, and starting it up (same IP, same DNS)? For the IDX cluster, do I build a new CM first and copy the config over, or are there other things to consider? What's a good process for the indexers (only 3)? Can I build new indexers on Ubuntu, add them to the cluster, and then remove the CentOS servers as the new Ubuntu servers come in, all the while letting clustering handle the data management?
@richgalloway  Is there any way to not have to use a separate syslog server? 
Hi Folks,  lately the MC has started behaving a little weird. After performing an investigation, whenever a SOC analyst tries to reduce the risk score of an object, instead of reducing the risk score it sometimes creates a double entry. Please have a look at the attached image.
Best Practice is to not ingest device logs directly into Splunk.  Any time Splunk restarts, data sent by the devices during the downtime will be lost.  The recommended approach is to send the logs (usually in syslog format) to a dedicated receiver (syslog-ng or Splunk Connect for Syslog (SC4S)) and forward them to Splunk from there.
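The usual shape of that setup, sketched with assumed paths and names (the source name, directory, index, and sourcetype are illustrative): syslog-ng writes each sender's events to its own file, and a Universal Forwarder on the same host monitors those files.

```
# syslog-ng.conf fragment -- write one file per sending host (assumed layout)
destination d_devices { file("/var/log/remote/${HOST}/device.log"); };
log { source(s_net); destination(d_devices); };
```

```
# inputs.conf on the forwarder -- pick the host name out of the path
[monitor:///var/log/remote/*/device.log]
index = network
sourcetype = syslog
host_segment = 4
```

This way the syslog daemon keeps receiving while Splunk restarts, and nothing is lost.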
Use wildcards for the unknown parts. [monitor://c:\users\*\appdata\local\app\*\app.log]
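As a fuller sketch, the whole stanza might look like this in inputs.conf (index and sourcetype are placeholder names):

```
# One wildcard for the user profile, one for the random-number folder
[monitor://c:\users\*\appdata\local\app\*\app.log]
disabled = false
index = main
sourcetype = app_log
```

Each * matches a single path segment, so a new random-number folder appearing later is picked up automatically.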
hi @jaibalaraman  try:

index=_internal source=*metrics.log group=per_index_thruput earliest=-1w@d latest=-0d@d
| timechart span=1d sum(kb) as Usage by series
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024, 3)]

(After timechart ... by series there is no single "Usage" field - each series becomes its own column - so the per-column foreach is needed rather than a plain eval.)
Have you taken away filters one by one, starting from the last one?  That is the first step in diagnosis.  One key question you need to answer is: is the group-by field named "series" extracted in Splunk?  A second question, of course, is whether the aggregated field "kb" is extracted.
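One way to answer both questions at once is to drop the aggregation and look at the raw events (same source as the search above):

```
index=_internal source=*metrics.log group=per_index_thruput
| head 20
| table _time series kb
```

If the series or kb columns come back empty here, the timechart downstream can only be empty too.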
@kamlesh_vaghela  Sir, may I ask about adding data to the KV store?  I tried the example from the SA-devforall post entry, as below:

function postNewEntry() {
    var record = {
        _time: (new Date).getTime() / 1000,
        status: $("#status").val(),
        message: $("#message").val(),
        user: Splunk.util.getConfigValue("USERNAME")
    }
    $.ajax({
        url: '/en-US/splunkd/__raw/servicesNS/nobody/SA-devforall/storage/collections/data/example_test',
        type: 'POST',
        contentType: "application/json",
        async: false,
        data: JSON.stringify(record),
        success: function(returneddata) { newkey = returneddata }
    })
}

and modified it to mine:

function postNewEntry(unique_id, status) {
    var record = {
        _time: (new Date).getTime() / 1000,
        status: $("#status").val(),
        unique_id: $("#unique_id").val(),
    }
    $.ajax({
        url: 'https://10.1.1.1:8089/servicesNS/nobody/test/storage/collections/data/man_data/',
        type: 'POST',
        contentType: "application/json",
        async: false,
        data: JSON.stringify(record),
        success: function(returneddata) { newkey = returneddata }
    })
}

I always get net::ERR_CERT_AUTHORITY_INVALID. If I do it on Linux with curl, I get a different error when testing another KV store collection:

[root@test-Splunk01 local]# curl -k -u admin:admin123 https://10.1.1.1:8089/serviceNS/nobody/test/storage/collections/data/splunk_man -H 'Content-Type: application/json' -d '{"status": "UnACK" , "unique_id" : "11305421231213"}'
<!doctype html><html><head><meta http-equiv="content-type" content="text/html; charset=UTF-8"><title>405 Method Not Allowed</title></head><body><h1>Method Not Allowed</h1><p>Specified method is not allowed on this resource.</p></body></html>

What can I do now? By the way, this is my transforms config:

[root@plunk01 local]# cat transforms.conf
[man_data]
fields_list = _key , unique_id , status

[splunk_man]
collection = splunk_man
external_type = kvstore
fields_list = _key, unique_id , status

Thank you so much!
Hello,   I need to monitor log files that are in the following directory (or directories):   "c:\users\%username%\appdata\local\app\$randomnumber$\app.log". %username% is whoever is currently logged on (but I suppose I'd be OK with "*", i.e. any user folder) and $randomnumber$ is a unique ID that will always be different on every desktop, may change over time, and may be more than one folder for a given user. How would I make the file monitor stanza in inputs.conf do that?   Thanks!
I have a saved search that runs every day and does a partial fit over the previous day. I'm doing this because I need 50 days of data for the cardinality to be high enough to ensure accuracy. However, I don't want more than 60 days of data. How do I build up to 50 days of data in the model but then roll off anything over 60 days?   Thanks!
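For what it's worth, partial_fit only ever adds data to an existing model; there is no built-in way to expire old observations. One common workaround (a sketch only; the algorithm, field, and model names are hypothetical) is to keep the daily partial fit but periodically replace the model with a full fit over just the trailing window, since fit ... into overwrites the model:

```
index=my_index earliest=-60d@d latest=@d
| fit DensityFunction my_field into my_rolling_model
```

Scheduling that, say, weekly caps the model at roughly 60 days of data, while the daily partial fit keeps it current in between.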
Solution worked for me
Hi All, The data checkpoint file for Windows logs is taking up a lot of disk space (over 100 GB), and we are running out of disk space because of it. Where can I check the modular input script? How can I exclude the modinput for one of the checkpoints on particular servers? An example Windows log event follows:

\powershell.exe (CLI interpreter), Pid: 12345,\OSEvent: (Source: (Uid: xxxxxxxxx, Name: splunk-winevtlog.exe, Pid: 123123, Session Id: 0, Executable Target: Path: \Device\HarddiskVolume4\Program Files\SplunkUniversalForwarder\var\lib\splunk\modinputs\WinEventLog\Application)

Any help would be appreciated! Thanks in advance!
Good Afternoon, My leadership informed me that CrowdStrike is sending our logs to Splunk. Has anyone done any queries to show when a device is infected with malware? I don't know the CrowdStrike logs, but I'm hoping someone here can give me some guidance to get started. 
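As a heavily hedged starting point: the right field and index names depend entirely on which CrowdStrike integration is in use, but data from the Falcon Event Streams / SIEM connector typically carries an event_simpleName field, with detections arriving as DetectionSummaryEvent. Something like the following (index name assumed) may be a place to begin:

```
index=crowdstrike event_simpleName=DetectionSummaryEvent
| table _time ComputerName DetectName SeverityName FileName
```

Check a few raw events first to confirm the actual index and field names in your environment.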
Hello All, I am using Splunk as a data source and am trying to build dashboards in Grafana (v10.2.2 on Linux). Is there anything in Grafana so that I do not have to write 10 queries in 10 panels? Just one base query would fetch data from Splunk, and then in Grafana I could write additional commands or functions on top of the base query for each panel, so the load on Splunk is reduced. Similar to "post-process search" in Splunk: Post Process Searching - How to Optimize Dashboards in Splunk (sp6.io) I followed the instructions below and was able to fetch data from Splunk, but it causes heavy load and stops working the next day, after which all the panels show "No Data". Splunk data source | Grafana Enterprise plugins documentation Your help will be greatly appreciated! Thanks in advance!
Hi Team,  I tried the below search but am not getting any results:

index=aws component=Metrics group=per_index_thruput earliest=-1w@d latest=-0d@d
| timechart span=1d sum(kb) as Usage by series
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024, 3)]
I was wondering if anyone knew where I could find it, either in the logs or (even better) via the audit REST endpoint, when an automation account regenerates its auth token.  I've looked through the audit logs but haven't seen an entry for it.  Any leads or tips would be appreciated.  Thank you
Hi @Nour.Alghamdi, I found some existing info that if all the keys match, it could be a cert error. Please refer to this article and see if this helps. https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-troubleshoot-EUM-certificate-errors/ta-p/22383