All Posts


Could you explain what's wrong with the original search? What is expected and what are the actual results? Importantly, what is the logic in your original search that is supposed to meet your expectation? If I have to read your mind based on the code snippet, you are saying that:

1. The main search should give you searches that have NOT produced notables. (Question: why are you searching for action.notable=1 and not action.notable=0?)
2. The subsearch should give you searches that have produced notables. (Note: nobody in this forum except yourself knows what the dataset looks like, so always explain the dataset and the logic.)
3. The difference between 1 and 2 would give you something?

Setting aside whether action.notable should be 1 or 0, i.e., assuming that has_triggered_notables = "false" is the correct label for the main search, it should have zero overlap with the subsearch, which you labeled has_triggered_notables = "true". This means an outer join should give you everything in the main search. Is this what you see? Why would you expect anything different? Again, nobody in the forum except yourself has that answer.

Maybe action.notable is not something that indicates whether a notable is produced? Maybe this field doesn't even exist? You used the phrase "status enabled" to describe your criteria, but saved searches have no "enabled" or "not enabled" status. Do you mean scheduled, as discernible from the is_scheduled field, which has nothing to do with the nonexistent action.notable?

If you ask an unanswerable question, no one is able to give you an answer, and this one is full of the hallmarks of unanswerable questions. Before I give up, let me make a final wild guess: by "enabled" you mean is_scheduled=1, there is nothing about action.notable, and the subsearch actually does what I speculated in 2 above. In that case, here is a search you can try and tweak that doesn't involve an inefficient join:

| rest /services/saved/searches
| search title="*Rule" is_scheduled=1 NOT [search index=notable search_name="*Rule" orig_action_name=notable | stats values(search_name) as title]
| fields title
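If you want to sanity-check whether action.notable exists at all, something like this lists which alert actions each of those rules actually has enabled (a minimal sketch; actions is the attribute the saved/searches endpoint reports for enabled alert actions, and the title filter is carried over from your snippet):

| rest /services/saved/searches
| search title="*Rule"
| table title is_scheduled actions

If "notable" never appears in actions, that would explain why filtering on action.notable gets you nowhere.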
Verify Splunk has read access to the file. Check splunkd.log for messages about reading the file.
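A quick way to look for those messages from the search bar (a minimal sketch; the component names are the usual file-monitoring ones, and the "app.log" filter is an assumption based on the input in question):

index=_internal sourcetype=splunkd (component=TailReader OR component=WatchedFile OR component=TailingProcessor) "app.log"
| table _time host component log_level _raw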
I will try this out. Thanks!
Thanks! I just tried it - it doesn't SEEM to be working, I'm not getting any data in Splunk even though I know the files are being updated. Looking at the index (just searching index=someapp) returns no data (the index does exist). This is what I have:

[monitor://c:\users\*\appdata\local\someapp\apps\*\app.log]
index = someapp
sourcetype = someapp
disabled = 0
Yes, I tried it and the outcome is blank. Question - do I need to select a time frame like last 7 days or 30 days?
Yes, there is, but it's not considered a Best Practice. Define a TCP input to read the logs from the selected port.  See https://docs.splunk.com/Documentation/Splunk/latest/Data/Monitornetworkports
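For example, a minimal inputs.conf sketch for a TCP input on the receiving Splunk instance (the port, index, and sourcetype are placeholders to adjust for your environment):

[tcp://514]
connection_host = ip
index = network
sourcetype = syslog
disabled = 0

A [udp://514] stanza works the same way if the devices can only send UDP syslog.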
hi @gitingua  You could maybe try the solution mentioned in the comments here: https://community.splunk.com/t5/Dashboards-Visualizations/How-to-change-the-width-of-panels-in-xml/m-p/289203 
I am working on migrating from CentOS 7 to Ubuntu 22. Single search head, indexer cluster (3 indexers), and a deployment server used just to manage clients (not Splunk servers). For the SH and DS, is it just a straightforward matter of installing the same Splunk version on the new Ubuntu server, copying the config over, checking permissions, and starting it up (same IP, same DNS)? For the IDX cluster, do I build a new CM first and copy the config over, or are there other things to consider? What's a good process for the indexers (only 3)? Can I build new indexers on Ubuntu, add them to the cluster, and then remove the CentOS servers as the new Ubuntu servers are added, all the while letting clustering handle the data management?
@richgalloway  Is there any way to not have to use a separate syslog server? 
Hi Folks, lately MC has started behaving a little weird. After performing an investigation, whenever a SOC analyst tries to reduce the risk score of an object, sometimes instead of reducing the risk score it creates a double entry. Please have a look at the attached image.
Best Practice is to not ingest device logs directly into Splunk. Any time Splunk restarts, data sent by the devices during the downtime will be lost. The recommended approach is to send the logs (usually in syslog format) to a dedicated receiver (syslog-ng or Splunk Connect for Syslog (SC4S)) and forward them to Splunk from there.
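As a rough illustration of the last hop, if you go the syslog-ng-to-file route (SC4S instead sends to Splunk over HEC): a minimal inputs.conf sketch for a forwarder on the syslog box, where the directory layout, index, and sourcetype are assumptions that depend on how the receiver is configured to write its files:

[monitor:///var/log/remote/*/messages.log]
host_segment = 4
index = network
sourcetype = syslog
disabled = 0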
Use wildcards for the unknown parts.

[monitor://c:\users\*\appdata\local\app\*\app.log]
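Those two * wildcards should cover both the per-user folder and multiple or changing random-number folders, since each * matches any single directory name (use ... instead of the second * if the logs ever nest more than one level deep). A fuller sketch of the stanza, where index and sourcetype are placeholders:

[monitor://c:\users\*\appdata\local\app\*\app.log]
index = someapp
sourcetype = someapp
disabled = 0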
hi @jaibalaraman  try:

index=_internal source=*metrics.log group=per_index_thruput earliest=-1w@d latest=-0d@d
| eval GB = kb/1024/1024
| timechart span=1d sum(GB) as Usage by series
Have you taken away the filters one by one, starting from the last one? This is the first step in diagnosing the problem. One key question you need to answer is: is the groupby field named "series" extracted in Splunk? A second question, of course, is whether the aggregated field "kb" is extracted.
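One quick way to check both at once (a minimal sketch; count(series) and count(kb) only count events where the field is present, so a zero points to a missing extraction):

index=_internal source=*metrics.log group=per_index_thruput earliest=-1d
| stats count as events count(series) as has_series count(kb) as has_kb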
@kamlesh_vaghela Sir, may I ask about adding data to the KV store? I have tried the example from the SA-devforall post entry, as below:

function postNewEntry() {
    var record = {
        _time: (new Date).getTime() / 1000,
        status: $("#status").val(),
        message: $("#message").val(),
        user: Splunk.util.getConfigValue("USERNAME")
    }
    $.ajax({
        url: '/en-US/splunkd/__raw/servicesNS/nobody/SA-devforall/storage/collections/data/example_test',
        type: 'POST',
        contentType: "application/json",
        async: false,
        data: JSON.stringify(record),
        success: function(returneddata) { newkey = returneddata }
    })
}

and modified it to mine:

function postNewEntry(unique_id, status) {
    var record = {
        _time: (new Date).getTime() / 1000,
        status: $("#status").val(),
        unique_id: $("#unique_id").val(),
    }
    $.ajax({
        url: 'https://10.1.1.1:8089/servicesNS/nobody/test/storage/collections/data/man_data/',
        type: 'POST',
        contentType: "application/json",
        async: false,
        data: JSON.stringify(record),
        success: function(returneddata) { newkey = returneddata }
    })
}

I always get net::ERR_CERT_AUTHORITY_INVALID. If I do it on Linux with curl, I get another error code when testing another KV store collection:

[root@test-Splunk01 local]# curl -k -u admin:admin123 https://10.1.1.1:8089/serviceNS/nobody/test/storage/collections/data/splunk_man -H 'Content-Type: application/json' -d '{"status": "UnACK" , "unique_id" : "11305421231213"}'
<!doctype html><html><head><meta http-equiv="content-type" content="text/html; charset=UTF-8"><title>405 Method Not Allowed</title></head><body><h1>Method Not Allowed</h1><p>Specified method is not allowed on this resource.</p></body></html>

What can I do now? By the way, this is my transforms config:

[root@plunk01 local]# cat transforms.conf
[man_data]
fields_list = _key, unique_id, status

[splunk_man]
collection = splunk_man
external_type = kvstore
fields_list = _key, unique_id, status

Thank you so much!
Hello, I need to monitor log files that are in the following directory (or directories): "c:\users\%username%\appdata\local\app\$randomnumber$\app.log". %username% is whoever is currently logged on (but I suppose I'd be OK with "*", any user folder) and $randomnumber$ is a unique ID that's always going to be different for every desktop, possibly change over time, and possibly be more than one folder for a given user. How would I make the file monitor stanza in inputs.conf do that? Thanks!
I have a saved search that runs every day and does a partial fit over the previous day. I'm doing this because I need 50 days of data for the cardinality to be high enough to ensure accuracy. However, I don't want over 60 days of data. How do I build up to 50 days of data in the model but then roll off anything over 60 days? Thanks!
Solution worked for me
Hi All, The data checkpoint file for Windows logs is taking up a lot of disk space (over 100 GB). Where can I check the modular input script? We are having issues with full disk space due to this. How can I exclude the modinput for one of the checkpoints on particular servers? An example Windows log event is as follows:

\powershell.exe (CLI interpreter), Pid: 12345,\OSEvent: (Source: (Uid: xxxxxxxxx, Name: splunk-winevtlog.exe, Pid: 123123, Session Id: 0, Executable Target: Path: \Device\HarddiskVolume4\Program Files\SplunkUniversalForwarder\var\lib\splunk\modinputs\WinEventLog\Application)

Any help would be appreciated! Thanks in advance!