All Posts

Hi @DilipKMondal, good for you, see you next time! Ciao and happy Splunking, Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Hi @munang, first of all: you can configure the retention you want for your data model, so if you want a longer retention time, you can configure it; you only need more storage. The storage required to keep one year of an accelerated data model is around 3.4 times the average daily indexed volume. Accelerated data models are usually reserved for the searches that must be very fast; if you need to search older data, you can still use the raw data in your indexes or in summary indexes. As I said, usually the last 30 days cover more than 85% of the searches that you need to be fast. Ciao. Giuseppe
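For example, the retention window is set per data model in datamodels.conf on the search head (a minimal sketch; the stanza name and schedule are only illustrative, use your own data model's name):

    [Network_Traffic]
    acceleration = true
    # keep one year of summaries instead of the default window
    acceleration.earliest_time = -1y
    acceleration.cron_schedule = */5 * * * *

Remember that extending acceleration.earliest_time increases the summary storage accordingly.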
Hello. I'm using Splunk DB Connect 3.13.0, which worked fine until I restarted the server. Since then the task server does not start and I'm getting this error:

    message from "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbxquery.sh"
    com.splunk.modularinput.Event.writeTo(Event.java:65)
    com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:137)
    com.splunk.dbx.command.DbxQueryServerStart.streamEvents(DbxQueryServerStart.java:51)
    com.splunk.modularinput.Script.run(Script.java:66)
    com.splunk.modularinput.Script.run(Script.java:44)
    com.splunk.dbx.command.DbxQueryServerStart.main(DbxQueryServerStart.java:95)
    ERROR ExecProcessor [15275 ExecProcessorSchedulerThread] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbxquery.sh" action=dbxquery_server_start_failed error=java.security.GeneralSecurityException: Only salted password is supported
Hi @splunky_diamond, the normal activity flow for a SOC analyst is the following: there is a defined monitoring perimeter and a set of Correlation Searches that monitor that perimeter to find possible threats. If one or more Correlation Searches trigger an alert, a Notable is created. I think an email notification is useful only for night monitoring, because during the day the SOC analysts should always be connected to ES. When a Notable is triggered (a Notable is one or more events that match a condition to check, not a security incident!), a SOC analyst takes charge of it and investigates using the investigation panels and eventually his/her own searches. He/she could also use the other ES dashboards, even if I have never seen this! Based on the investigation, the SOC analyst decides whether: it's a real security incident, it's a false positive, or the Notable requires escalation for a deeper check. If it's a false positive, the SOC analyst closes the case, eventually adding a suppression rule; if the Notable requires an escalation check, the SOC analyst passes the case on, following the indications of the related playbook; if it's a real security incident, the SOC analyst applies the predefined playbook actions or hands the activity to the colleagues enabled to intervene. This is a general flow, and it depends on the internal processes of the SOC. Only one additional piece of information: if (as usual) your SOC has only a few SOC analysts, it could be a good idea to associate a Correlation Search not with a Notable but with a Risk Score addition; this way the SOC analyst learns of a threat with some delay, but the SOC has to manage fewer Notables. In other words, if there are three SOC analysts and the SOC receives 10,000 Notables/day, they cannot check all of them. Ciao. Giuseppe
Hello, Splunkers! I hope there are some SOC analysts around who are using Splunk Enterprise and Splunk ES in their work. I've been learning Splunk for the past month; I have worked with Splunk ES a bit and tried configuring some correlation searches with automated notable generation along with email notification alerts. I now have to present some cases in my test lab, where I have an attacker who performs some malicious activity that triggers some of the correlation searches I have configured, and then I need to demonstrate the full investigation process from a SOC analyst's POV. The problem is, I have almost zero knowledge of how a SOC operates, and if they were to use Splunk Enterprise and the Enterprise Security app, what would they do exactly? Would they just go over all the new notables and look at the drill-down searches, trying to understand which notables are related to other notables? Would they try to correlate the events by time? Would they work only within Splunk ES, or would they also go to the dashboards and search for some data there? I would appreciate it if someone could explain how a SOC works with Splunk ES in the case of simple, uncomplicated attacks that trigger 2-3 correlation searches at most. Also, a small question: since I have the email notifications configured, who usually receives the email notifications about triggered correlation searches? Is it the SOC director, an analyst, or someone else? Please let me know if more information is required; I would love to provide as many details as needed, as long as I get the best answer that would help me. Thanks in advance for taking the time to read and reply to my post!
Thank you very much! That pretty much explains everything!  
Hello. I'm a Splunk newbie. There is some confusion about setting up data model acceleration. According to the official documentation, if the data in your data model is out of date, Splunk will continuously delete it and keep the data in your data model up to date. So, for example, if you summarize a month's worth of data on a 0 12 * * * schedule: 1. Data from day -30 to day 0 is summarized. 2. A day passes. 3. Data from day -29 to day +1 is summarized. 4. The day -30 data is deleted. Is this process correct? If so, why is it done this way? And is there a way to keep the information summarized through data model acceleration continuously, like a summary index, without it being deleted?
Hello tshah-splunk, Increasing the max_upload_size in web.conf worked in my case. Gave you a Karma point. Thanks 
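For reference, that setting lives in the [settings] stanza of web.conf on the search head (a sketch; the value here is illustrative and is in megabytes):

    [settings]
    # hard maximum size of a file upload through Splunk Web, in MB
    max_upload_size = 1024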
The First Law of asking an answerable question states: Present your dataset (anonymized as needed), illustrate the desired output from the illustrated dataset, and explain the logic between the illustrated dataset and the desired output. (Without SPL.) If attempted SPL does not give the desired output, also illustrate the actual output (anonymized as needed), then explain its difference from the desired results if it is not painfully clear. "I am able to pull my AD users account information successfully except for their email addresses." Can you explain which source you are pulling AD info from? Your SPL only uses a lookup file. Do you mean that lookup table AD_Obj_User contains email addresses but the illustrated SPL does not output them, or that your effort to populate AD_Obj_User fails to obtain email addresses from a legitimate AD source (as @deepakc speculated)? If the former, what is the purpose of the SPL? What is the content of AD_Obj_User? What is the desired output, and what is the logic between the content and the desired output? If the latter, what is the purpose of showing the SPL?
It’s mainly around performance, time to value, and using all the ES features. You could be a large enterprise ingesting loads of data sources, with a big SOC operation, and you might want to run many different correlation rules; this would not be practical on raw data, and it would take a long time to develop new rules when so many come out of the box. So this is where data models come into play: faster and better all round. For you, it sounds like you have just a few use cases and can run your own rules on raw data, and if you're happy with that, then that's fine. But you're not then exploiting what ES has to offer and all the use cases built around data models.
Hi @deepakc  - Good Morning. Thank you, this is really helpful. You have a great day! Best Regards, Dilip
Hi @gcusello - Good Morning. Thank you for the wonderful help and guidance. I am now able to proceed with this. I highly appreciate your help. You have a great day! Best Regards, Dilip K Mondal
You might be able to change the layout type in the code. However, the coordinates and other configuration parameters need to be adjusted based on the dimensions. Make a clone/copy of the dashboard to experiment on, so that the original dashboard is not affected:

    "layout": {
        "type": "grid",
        "options": {
            "width": 1440,
            "height": 960
        }
    }

Having said that, the absolute layout type gives you a lot of flexibility: https://docs.splunk.com/Documentation/Splunk/9.2.1/DashStudio/Layouts In your case, why can't you add all the tables to the canvas? The size of the canvas can be changed, as you might have already explored. If you could share the dashboard code, we would probably get a better idea of the actual situation.
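If it helps, a hedged sketch of what the absolute variant could look like (the item name and coordinates are placeholders, not taken from your dashboard):

    "layout": {
        "type": "absolute",
        "options": {
            "width": 1440,
            "height": 1200
        },
        "structure": [
            {
                "item": "viz_table_1",
                "type": "block",
                "position": { "x": 20, "y": 20, "w": 680, "h": 350 }
            }
        ]
    }

With absolute layout you can enlarge the canvas height and stack as many tables as you need.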
It could be a permissions issue: you need read access to the email address attribute ((&(objectClass=user)(objectCategory=person)(mail=*))). Check the permissions of the user account that is being used to pull the LDAP data; see your AD admin. Or run something like the below to check under that user account:

    dsquery user -samid username | dsget user -email

If not, find out how the lookup is being populated; normally it's done via the ldapsearch command, see the references below. Check the ldapsearch that creates the lookup and you should have the data there; this may already have been created as a scheduled search.

Reference:
LDAP search using the command: https://docs.splunk.com/Documentation/SA-LdapSearch/3.0.8/User/Theldapsearchcommand
LDAP add-on: https://docs.splunk.com/Documentation/SA-LdapSearch/3.0.8/User/AbouttheSplunkSupportingAdd-onforActiveDirectory
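If the add-on is installed, the search that (re)builds the lookup typically looks something like this (a sketch; the domain name, attribute list, and lookup file name are assumptions based on your post):

    | ldapsearch domain=default search="(&(objectClass=user)(objectCategory=person)(mail=*))" attrs="sAMAccountName,displayName,mail"
    | table sAMAccountName displayName mail
    | outputlookup AD_Obj_User.csv

If mail comes back empty here as well, that points back to the permissions/attribute issue above.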
Check your CSV file; it might be to do with the CSV formatting. Create a simple test CSV file with a few headers and some data and see if that goes through. If it does, you can then check your CSV and ensure it's correctly formatted.
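For instance, a minimal test file could be as simple as this (the column names and values are placeholders):

    name,department,email
    alice,engineering,alice@example.com
    bob,sales,bob@example.com

If that uploads cleanly, compare its encoding, delimiters, and header row against your real file.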
Hello @deepakc, thanks for your post. As I mentioned in my post, I knew about data model acceleration and the ability to run searches across multiple sources. Undoubtedly, these are the main advantages of using data models. However, regarding the usage of data models in Splunk ES: I have a custom correlation search that runs without using data models, and it works perfectly fine, which leaves the question about the need for data models in ES correlation searches still open.
Normal searches run on the raw data, while data models are a populated dataset based on target fields of the raw data; hence data models are faster. Data models normalize and standardize data across multiple indexes, allowing users to analyse data consistently regardless of the source. They include accelerations and summaries, such as data summaries; these accelerations speed up searches and make analysis faster and more efficient, especially for large datasets. Overall, using data models in Splunk enhances data analysis capabilities, improves performance, and simplifies the process of exploring and understanding data. ES is based on data models: you index your security data sources (Firewall/IDS, Network, Auth) in the normal way, you ensure you install the CIM (Common Information Model) compliant TAs for those data sources, and you then tune the data models to search your target indexes; they will search and populate based on the types of data sources. Once everything is in and configured, ES lights up, and you can deploy various correlation rules, which mostly run on the data models. (Simple explanation.) Example: you want your Network Traffic data model to have data available for ES. You ingest Cisco ASA data into your normal index, install the Cisco ASA TA from Splunkbase, then tune CIM for this data so it searches it and populates the Network_Traffic data model on a regular basis; from there you can run various rules or create your own, using tstats etc. See the below for the various data models and normalised fields: https://docs.splunk.com/Documentation/CIM/5.0.2/User/Howtousethesereferencetables
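To make that last step concrete, a correlation-style search over the accelerated Network_Traffic data model could look something like this (a minimal sketch; the filter and split-by fields are illustrative):

    | tstats summariesonly=true count sum(All_Traffic.bytes) as bytes from datamodel=Network_Traffic where All_Traffic.action=blocked by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port

Because this runs against the accelerated summaries rather than raw events, it stays fast even over large time ranges.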
Hello splunkers! I have a simple question regarding Splunk data models and regular searches. I have found some answers, but I would like to dig deeper. What's the advantage of using data models? That is, why would we want to use data models instead of regular searches where we just specify the indexes in which we want to search for the data? I know so far that data models allow searching through multiple sources (network devices and workstations) thanks to standardized fields. I also know about data model acceleration: we can use tstats in our searches on accelerated data models in order to speed up the searches. Is there a particular scenario where we must use data models, and not using them will not work? (I am using Enterprise Security as well, so if there is any scenario that involves this app, it is most welcome.) I would really appreciate a well-detailed answer. Thank you for taking the time to read and reply to my post.
Anecdotal, but I found a few other log-shoveling vendors appeared to have similar issues with the Forwarded Events log and Server 2022: the agent crashing/restarting constantly. They seem to have patched their problems already. Fix Windows eventchannel forwarded events by nbertoldo · Pull Request #20594 · wazuh/wazuh (github.com) [Winlogbeat] Repeated warnings · Issue #36020 · elastic/beats (github.com) Interesting at least.
I don't really have a solution.  I was going to suggest multiple white lists, but you said that didn't work for you. Also, you want to filter on AccountName and ObjectName, but those fields are not supported by whitelist/blacklist.  See https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Inputsconf#Event_Log_allow_list_and_deny_list_formats for the list of supported fields. Consider ingesting the Windows events in XML format and filtering them using the $XmlRegex key.  See https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/MonitorWindowseventlogdata#Use_allow_lists_and_deny_lists_to_filter_on_XML-based_events for more information.
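Based on that second doc page, the inputs.conf stanza would look roughly like this (a sketch; the event ID and account pattern are placeholders for your own filter):

    [WinEventLog://Security]
    renderXml = true
    # keep only process-creation events...
    whitelist = $XmlRegex%<EventID>4688</EventID>%
    # ...and drop those generated by service accounts
    blacklist = $XmlRegex%<Data Name='SubjectUserName'>svc-.*</Data>%

The XML rendering of the event (renderXml = true) is what the regex is matched against, so check the raw XML in Event Viewer when building the pattern.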