
All Posts

What is not displaying correctly - what is different between the two tables?  
When we use the query below, the data does not display correctly in the dashboard panel, but if we open the panel's query via "Open in Search", the data displays correctly. How can we fix this issue?

index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) repoter.dataloadingintiated
| stats count by local
| append [search index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) task.dataloadedfromfiles NOT "error" NOT "end_point" NOT "failed_data"
    | stats count as FilesofDMA]
| append [search index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) "app.mefwebdata - jobintiated"
    | eval host = case(match(host_ip, "12.234"), "HOP"+substr(host, 120, 24), match(host_ip, "10.123"), "HOM"+substr(host, 120, 24))
    | eval host = host + " - " + host_ip
    | stats count by host
    | fields - count
    | appendpipe [stats count | eval Error="Job didn't run today" | where count==0 | table Error]]
| stats values(host) as "Host Data Details", values(Error) as Error, values(local) as "Files created localley on AMP", values(FilesofDMA) as "File sent to DMA"
A private lookup created in App A can ONLY be seen in App A, so if you try to create the lookup definition in App B, the CSV will not show in the dropdown. If your lookup is listed like the above, with your username (red) and app (blue), then I believe it should be possible to create a definition in the same app for a private lookup. So if you cannot see your lookup in the dropdown, it may be a result of permissions - I am not sure, but if you can change your lookup's permissions to app level, you could see if that changes it. A sketch of what app-level sharing looks like under the hood follows below.
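For reference, app-level sharing of a lookup file corresponds to a metadata entry roughly like the following in the app's metadata/local.meta (the lookup file name here is a hypothetical example, not from the original post):

[lookups/my_lookup.csv]
# readable by everyone in this app, writable by admin; export = none keeps it app-scoped rather than global
access = read : [ * ], write : [ admin ]
export = none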
Hi Bowesmana, Thanks for your reply. I created the lookup table file by uploading the CSV file, and I am looking in the same app as the one where I created the lookup table. I am actually supposed to get the data from production Splunk, so I have very limited access. The lookup table file I created has private access, visible only to me. Would that be an issue?
Is the logic that IFF there is a previous message=executed for ID X, then if state=completed, the message should be changed to 'executed' - or should it always be 'executed' whenever state=completed?

| eval message=if(state="completed", "executed", message)

will just change message to executed whenever state is completed. If you ONLY want to change completed to executed when there is a previous "started", then it is important to understand your data a bit better, as ordering becomes significant - you have started, completed and pending for ID 101, so I am guessing those are not in order of occurrence. You would look at using streamstats, stats, eventstats or transaction to solve this (see the sketch below) - but can you give more detail about your existing search and data?
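As an illustration only, here is a minimal streamstats sketch. It assumes events carry ID, state and message fields and that "a previous started event" is the precondition - both are assumptions about the data, not something confirmed in the question:

| sort 0 ID _time ``` process events oldest-first within each ID ```
| streamstats count(eval(state="started")) as started_seen by ID ``` running count of prior "started" events ```
| eval message=if(state="completed" AND started_seen>0, "executed", message)

With this, a completed event only gets message="executed" if the same ID already had a started event earlier in the stream; otherwise its message is left unchanged.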
How did you create the lookup?
- by uploading a CSV
- using the lookup editor
- using outputlookup

Which app did you create the lookup in, and what app are you in when trying to make the lookup definition? If you go to the list of lookup files (Lookups -> Lookup table files), can you see the lookup there, and what are its permissions? Make sure you look for all lookups visible in all apps, and check which app your lookup file is in.
Hello, I have created a Splunk lookup table file (the file is in CSV format) and now I am trying to create a lookup definition. But I couldn't create the lookup definition because when I searched for the lookup file, it did not appear in the drop-down menu for selection. What could be the reason? Can anyone help with this? Thanks in advance.
Hi, we can see message="executed" on the events where state="started", so we would like to apply that same message to the state="completed" events for the same IDs. I hope I worded this clearly. Thank you in advance.
As you don't have admin access, you have some options: 1. Create the transforms.conf / collections.conf config using a file editor, if you know what you're doing, and give it to your Splunk admin; they can do the rest (a hedged sketch of this config follows below). 2. Download a free instance of Splunk (install it if you know what you're doing), do the dev work there, and then give the config to your Splunk admin. 3. You can also use the Lookup Editor app - https://splunkbase.splunk.com/app/1724 - this is an easy way to create KV stores; you need this app installed, and it's popular, so get your Splunk admin to install it.
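For option 1, a minimal sketch of the two files might look like this - the collection name, lookup name and field names are all hypothetical placeholders:

collections.conf:

[my_collection]
# declare the fields and their types stored in the KV store collection
field.user = string
field.department = string

transforms.conf:

[my_kvstore_lookup]
# lookup definition that points SPL at the KV store collection above
external_type = kvstore
collection = my_collection
fields_list = _key, user, department

Once an admin deploys this, the lookup is usable in SPL, e.g. | lookup my_kvstore_lookup user OUTPUT department.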
@splunky_diamond you're welcome. Here are some more security tips to help you discover more. 1. Many security people use this app to help them with their security use cases; I use it myself - it has so many good features, and it can also make use-case recommendations based on your data sources. https://splunkbase.splunk.com/app/3435 2. ESCU - provides regular security content updates to help SOC analysts address ongoing time-sensitive threats, attack methods, and other security issues. https://splunkbase.splunk.com/app/3449 3. Here you will find many use cases for reference - a great place to baseline your security monitoring strategy. https://research.splunk.com/
Hi all, First post in Splunk, and I'm not even going to pretend I know the ins and outs of everything I am currently trying to achieve, so I apologise if this is an easy answer... I have created a dashboard that contains an HTML form, and through JS magic it does everything I need it to, including a 'submit' button that is connected to an HTML table in a different panel. When the button is clicked, the table is updated with the relevant information - happy days. Under the HTML table I have another button; when clicked, I want it to create a new dashboard that displays that table (there is more to it, but for now I just need it to create a new dashboard). After a bit of research I stumbled across AJAX, but I'm constantly receiving a 404 error. I understand that a 404 means the resource was not found, but every document I find indicates that this is the correct resource. My Splunk Enterprise instance is currently running on my laptop (127.0.0.1:8000), but I am at a frustrating loss now...

document.getElementById('confirmButton').addEventListener('click', function() {
    var dashboardData = {
        name: 'newDash',
        'eai:data': '<dashboard><label>$name$</label><description>$goal$</description><row><panel><html><h1>something</h1></html></panel></row></dashboard>',
    };
    $.ajax({
        url: '/serviceNS/nobody/search/data/ui/views',
        type: 'POST',
        data: dashboardData,
        success: function(response) {
            console.log('Success:', response);
        },
        error: function(jqXHR, textStatus, errorThrown) {
            console.error('Error:', textStatus, errorThrown);
        }
    });
});

The issue seems to indicate the url section is wrong, but if anyone could help point me in the right direction, I would greatly appreciate it. Kind Regards, oO0NeoN0Oo
Hello, I am not an admin with permission to create or view the transforms.conf file. I also don't have a lab, so I can't experiment with the KV store lookup. Can I create a KV store lookup definition in the Splunk UI without using transforms.conf? Will creating a KV store lookup definition in the Splunk UI automatically update transforms.conf? Please suggest. Thank you.
Facing the same issue - is there any solution?
No! Don't try to handle structured data with simple regexes. Unless you're very, very sure that the format is constant and always will be (which is typically not something you can rely on, since even the developers writing the solutions that produce such events don't know the exact order of fields their program will send), handling JSON or XML with regex is asking for trouble. A sketch of the safer approach follows below.
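A minimal SPL sketch of the structured alternative, assuming JSON events (the index, sourcetype and paths are hypothetical examples): spath extracts fields by their position in the structure, so the order of fields in the raw event doesn't matter:

index=my_index sourcetype=my_json ``` hypothetical source ```
| spath path="user.name" output=user_name ``` pull a nested JSON value ```
| spath path="items{}.id" output=item_ids ``` {} walks a JSON array ```
| table _time user_name item_ids

For XML events, the same spath command accepts XPath-like paths, so the regex-free principle carries over.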
Yup. If the threat actor has control over the machine, they could - for example - completely delete the Splunk forwarder from the computer, so you cannot be sure of anything after such a situation has happened. (I've seen very sensitive setups where events - not necessarily Windows event logs, but the general idea is the same - were printed out on a printer as a non-modifiable medium, so that they couldn't be changed in any way after they had been created.) For normal situations where you expect network downtime from time to time (such as sites with unstable network connections, mobile appliances, and so on), you can tweak your forwarder's buffer sizes so that it can hold the data back for the needed period of time and then send the queued data when it regains downstream connectivity - a sketch follows below. Be aware though that such a setup will create a host of potential problems resulting from the significant lag between the time an event is produced and the time it is indexed. They can be handled, but it needs some preparation and tweaking of some limits.
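A minimal outputs.conf sketch on the forwarder for the buffering part - the group name, server and queue size are illustrative assumptions, not recommendations:

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = indexer.example.com:9997
# a larger output queue lets the forwarder hold events in memory during an outage
maxQueueSize = 512MB

How long this bridges an outage depends on event volume, so the size has to be matched to your expected downtime window.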
As @richgalloway already pointed out - the format is wrong. You need the key=regex format, and you need to split it into separate whitelist entries (each entry can have multiple key=regex parameters). The trick here is that Account Name is not a field of the event itself but a field inside the event's Message field, so you need to match it as a regex within the Message field. You'd effectively end up with something like:

whitelist1 = EventCode=%(4624|4634|4625)% Message=%Account Name:.*\.adm%
whitelist2 = EventCode=%(4659|4663|5145)% Message=%Object Name:.*Test_share%
It's also worth noting that typically a SOC (similarly to a NOC, support and similar groups) is organized hierarchically, regardless of the actual tool used (it might be ES or any other SIEM, and there could be a SOAR tool in place to simplify the process or automate some of its steps).

The 1st line operator's task is to check the actual alert (it might be a notable in ES, an asset exceeding a risk score threshold, or anything that is defined procedurally for a given SOC), verify it (typically according to a predefined playbook), react to it if it's something "standard" with a known reaction procedure, and pass it on to the 2nd line if the situation cannot be handled within the playbook's defined parameters. A 1st line operator will typically use a set of predefined dashboards if using ES, or might just work from playbooks defined in a SOAR solution and not touch the SIEM at all.

The 2nd line analyst usually has more knowledge about the company environment and access to more tools. Since it's this analyst's task to get more insight into a situation when an alert cannot be handled in a predefined way, they will typically use ES and Splunk in general, along with other tools, to find more information about the context of the possible threat. Most cases end at the 2nd line.

If everything else fails, 3rd line experts are called in (many smaller companies, for cost reasons, don't even employ 3rd line analysts in-house but instead purchase a number of man-hours as a subscription service from an external provider). They will utilize everything at their disposal, including of course digging through the data in Splunk, as well as reaching out to the solutions that generated those events, and will generally try to do whatever is humanly possible to either stop the threat or - if the attack has already succeeded - limit its aftermath and restore the environment to a normal state.

In other words - the further you go in the incident handling process, the more you'll probably be dealing with ES and Splunk in general.
The overall idea is more or less correct, but the details are a bit more complicated than that. 1. The summary-building search is spawned according to schedule and builds the summary data similarly to writing indexed fields when ingesting data (in fact, accelerated summaries are stored the same way, in .tsidx files, as indexed fields are). 2. The accelerated summaries are stored in buckets corresponding to the buckets of raw data. 3. The old buckets are not removed by the summary-building process but - as far as I remember - by the housekeeper thread (the same one that is responsible for rolling event buckets). So it's not a straightforward FIFO process. Also, the summary range is not a 100% precise setting; because data is stored in buckets and managed as whole buckets, you might still have some parts of your summaries exceeding the defined summary range. Another thing worth noting (because I've seen such questions already): no, you cannot have a longer acceleration range than the event data retention. When an event bucket is rolled to frozen, the corresponding data model summary bucket is deleted as well.
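To tie the above to configuration, here is a minimal datamodels.conf sketch - the stanza name is hypothetical, while the settings themselves are the standard acceleration settings being discussed:

[My_Data_Model]
acceleration = true
# the "summary range" - how far back summaries are built and retained
acceleration.earliest_time = -1mon
# how often the summary-building search runs
acceleration.cron_schedule = */5 * * * *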
Hi, @gcusello Thank you very much for your reply. However, there is something I am still confused about.

1. Exact meaning of the data retention period: for example, if the data retention period is set to 1 year, does that mean the summarized data built by the initial acceleration will be kept for 1 year?

2. Meaning of the data summary scope: assuming one month of data is set as the summary range and the cron expression is set to */5 * * * *, the latest data keeps being summarized every 5 minutes - but once data becomes older than the one-month range, will it be deleted?

I would appreciate your reply. Thank you.
Thank you. And how do you read properties from it? I was looking in the documents you attached and could not find a reference to the self.service.confs object... Could you please attach an example of how to read a specific property from a specific stanza? Thank you.