All Posts


One more comment: if you are already indexing that amount of data with so few indexers, I'm really surprised that you have an ingestion-based license! Especially when your normal volume is "small" but a DDoS can double it from time to time, I propose that you ask about the CPU-based licensing model (called SVC in Splunk Cloud; it has another name on-prem). Anyhow, as others said, you must rearchitect your environment and add nodes and disk based on your average daily usage, the needed retention time, and the queries you need to run. For that you need a local person to discuss your scenarios and needs with.
Others have already answered you, but there is one app, https://splunkbase.splunk.com/app/7300, which could help you find anything else you want to get rid of.
It's exactly this way. A data model just describes a data set and what it could contain. Usually it doesn't require that all of those attributes are present. The other things are something you can achieve more easily by using a data model, but they definitely aren't part of the data model's requirements/definition. Dashboards are knowledge objects that help one present data the same way every time without writing SPL again and again to get the needed results. Quite often dashboards have some interactions through which the user can change their behavior; we then call those forms. One can use DMs in dashboards inside SPL, or use pivots to create reports or dashboards directly from a DM.
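A minimal sketch of using a DM inside dashboard SPL (assuming the CIM Web data model is installed; the status/src fields come from that model):

| datamodel Web Web search
| search Web.status=404
| stats count by Web.src

The same panel could be built with a pivot against the Web data model instead of hand-written SPL.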
One thing to consider: are your Docker instances short- or long-lived? If they are short-lived, it could be easier to manage them by adding the needed Splunk apps into the image itself. But when they are long-lived, you should use a DS or another automated way to manage those apps. As @PickleRick already said, running a DS for Docker clients just means adding those apps into the deployment-apps folder and defining the serverclass as a separate app. Installing and applying those should be automated if possible.
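As a rough sketch of what that could look like (the server class and app names here are made up for illustration), the serverclass.conf on the DS might contain:

[serverClass:docker_forwarders]
whitelist.0 = docker-*

[serverClass:docker_forwarders:app:my_docker_inputs]
restartSplunkd = true
stateOnClient = enabled

where my_docker_inputs lives under $SPLUNK_HOME/etc/deployment-apps/ on the DS.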
Have you looked at that installation log file? Usually there is a reason recorded for why installing Splunk has failed!
Thank you for the quick and clear response. I'll try to activate the DS.
Hi @shraddha09, as @livehybrid and @richgalloway also said, it's really difficult to help you without any information. The only additional information that I can add is that you cannot use an eval condition in a where command: you must use the eval command and then, in a different row, the where condition. Ciao. Giuseppe
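To illustrate the point (a minimal sketch; the index and field names are made up), the calculation goes into an eval and the condition into a separate where on the next row:

index=web sourcetype=access_combined
| eval resp_secs = response_time / 1000
| where resp_secs > 2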
Hello, I am sorry that I am not providing a solution, but just asking whether you were able to achieve it? I have the same requirement and am struggling to achieve it. Thanks, Sumit Rai
Thanks for the response. The issue has been resolved by creating a new configuration file and moving the configurations there. Syslog-ng was not letting me modify the default conf file.
Well... no. That's a common misconception about data models. They do not _do_ anything in general. They are an abstract definition that your data should conform to. They might provide some search-time calculated fields, but nothing regarding data models works "before data is being indexed". And generally data models do not "enrich" data as such. It's the other way around - you sometimes need to enrich your data (for example, create lookups mapping the actual values you have in your events to the values the data model expects) to make your data compliant with the data model. Finally, data models as such do not accelerate anything. Yes, if you have a data model, you can enable data model acceleration, which periodically creates and updates summaries based on the data model definition, but that is not functionality of the data model itself; it is additional mechanics built on top of the data model. (original post fixed).
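For example (a sketch assuming an accelerated CIM Web data model; the names are illustrative), those summaries are only consulted when you explicitly search them, e.g. with tstats:

| tstats summariesonly=true count from datamodel=Web where Web.status=404 by Web.src

The data model itself remains a passive description; tstats reading the acceleration summaries is the separate mechanism that makes such a search fast.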
| append ```query for event brigdel``` [ search index="aws_np" [| makeresults | eval earliest=strptime("12/03/2025","%d/%m/%Y") | eval latest=relative_time(earliest,"+1d") | table e... See more...
| append ```query for event brigdel```
    [ search index="aws_np"
        [| makeresults
        | eval earliest=strptime("12/03/2025","%d/%m/%Y")
        | eval latest=relative_time(earliest,"+1d")
        | table earliest latest]]
hi @ITWhisperer, based on the date provided in the main search (let's say 12/Mar/2025 1pm to 12/Mar/2025 1:30pm), I want to use only the date, 12/Mar/2025, in the second one.
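A minimal sketch of one way to get that (reusing the makeresults pattern from above; the hard-coded timestamp is only illustrative) is to snap the timestamp to the start of its day with relative_time and the "@d" snap modifier:

| makeresults
| eval earliest=relative_time(strptime("12/03/2025 13:00","%d/%m/%Y %H:%M"), "@d")
| eval latest=relative_time(earliest, "+1d")
| table earliest latest

This yields earliest/latest bounds covering the whole of 12 Mar 2025, regardless of the time of day in the original value.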
Thank you for responding to my post! I've completed all the recommendations below to no avail. Within the registry, there were no Splunk folders. I also deleted the folders from the C: drive. I'm not sure what to do at this point. If you have any more tips and tricks, please let me know.
Thank you for responding. I tried that method of installing but still doesn't install. With the "/quiet" install, the command line freeze for a bit then a minute later, it still doesn't install. I ca... See more...
Thank you for responding. I tried that method of installing but still doesn't install. With the "/quiet" install, the command line freeze for a bit then a minute later, it still doesn't install. I cannot find out why its completing its rolling back actions.  If you have anymore tips and tricks, please let me know. 
Hi @tech_g706
Unfortunately I don't think you're going to get the best response here, as not many users in this forum will have specific syslog-ng experience. If it helps, I would start with checking the logs; try the following:

journalctl -xeu syslog-ng

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @okumar1,
The two are completely different things, but to quickly break down the difference:

Data models are a definition of a data structure. They can be used to manipulate _raw data into a common set of fields (see the Common Information Model for more info!) at search time by:
- extracting fields from raw data
- renaming/transforming/calculating fields
Data models can be accelerated, which builds data summaries behind the scenes for faster data retrieval. This provides improved search performance and improves data quality and consistency.

Splunk dashboards visualise and analyse data in a user-friendly interface using charts/graphs/tables and custom visualisations. Dashboard inputs/tokens allow for interaction with and filtering of the displayed data, helping provide real-time insights and trends for data-driven decision making. Dashboards allow different views on the same dataset for different stakeholders and users.

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
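For instance (a sketch, assuming an accelerated CIM Web data model; the model and field names are illustrative), a dashboard panel can be driven straight from a data model with tstats, getting both the common field names and the acceleration speed-up:

| tstats count from datamodel=Web by _time span=1h

rendered as a chart in the dashboard.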
Hello @PickleRick, thanks for pointing out the issues. I will check my query and see how I can optimize it. The logging is not perfect, which is why I have to take this route; I will check and see how I can make it better. Also, I am looking for one GUID; that part is commented out. I am looking for the whole set.
This is actually syslog-ng's internal problem and has nothing to do with Splunk. Check the system logs and check the syslog-ng configuration (I'm not a syslog-ng expert, but I believe it has an option to validate your configuration, e.g. syslog-ng --syntax-only).
Ok. Let me offer you some additional pointers here.

1. Whenever I see a dedup command I raise my eyebrows questioningly - are you sure you know how dedup works and that it is really what you want?

2. Your subsearch is highly suboptimal considering you're just looking for a single, relatively unique value of the guid. As it is now, you're plowing through all data for the given time range, extracting some fields (which you will not use later) with a regex, and finally keeping only a small subset of those initial events. An example from my home lab environment. If I search

index=mail | rex "R=(?<r>\S+)" | where r="1u0tIb-000000005e9-07kx"

over all time, Splunk has to throw the regex at almost 11 million events and it takes 197 seconds. If I narrow the search at the very beginning and do

index=mail 1u0tIb-000000005e9-07kx | rex "R=(?<r>\S+)" | where r="1u0tIb-000000005e9-07kx"

the search takes just half a second and scans only 8 events. Actually, if you had your extractions configured properly for your events, you could just do the search like

index="aws_np" aws_source="MDM" type="Contact"

and it would work. You apparently don't have your data onboarded properly, so you have to do it like in your search, but this is ineffective. The same applies to the initial search, where you do a lot of heavy lifting before hitting the where command. By moving the raw "200" and "Create" strings to the initial search you may save yourself a lot of time.

3. To add insult to injury, your appended search is prone to subsearch limits, so it might get silently finalized and you would get wrong/incomplete results without even knowing it.

4. You are doing several separate runs of the spath command, which is relatively heavy. I'm not sure here, but I'd hazard a guess that one "big" spath, with fields filtered immediately afterwards in order to not drag them along and to limit memory usage, might be better performance-wise.

5. You're statsing only three fields - request_time, output_time and messageGUID. Why extract the text field?
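To sketch point 4 (the field names are the ones from this thread; whether one broad extraction wins depends on the data), one "big" spath followed by immediate field pruning might look like:

| spath
| fields request_time output_time messageGUID

instead of several targeted spath invocations.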
Hi, I set up syslog-ng to receive syslog from devices, and a Splunk HF on the same server reads those log files. However, I am not able to restart syslog-ng and am getting an error. syslog-ng is running as root and the log file directory is owned by the splunk user.

Job for syslog-ng.service failed because the control process exited with error code.

and systemctl status syslog-ng.service:

× syslog-ng.service - System Logger Daemon
Loaded: loaded (/usr/lib/systemd/system/syslog-ng.service; enabled; preset: enabled)
Active: failed (Result: exit-code) since Sat 2025-04-05 11:39:04 UTC; 9s ago
Docs: man:syslog-ng(8)
Process: 1800 ExecStart=/usr/sbin/syslog-ng -F $SYSLOGNG_OPTS (code=exited, status=1/FAILURE)
Main PID: 1800 (code=exited, status=1/FAILURE)
Status: "Starting up... (Sat Apr 5 11:39:04 2025"
CPU: 4ms
Apr 05 11:39:04 if2 systemd[1]: syslog-ng.service: Scheduled restart job, restart counter is at 5.
Apr 05 11:39:04 if2 systemd[1]: syslog-ng.service: Start request repeated too quickly.
Apr 05 11:39:04 if2 systemd[1]: syslog-ng.service: Failed with result 'exit-code'.
Apr 05 11:39:04 if2 systemd[1]: Failed to start syslog-ng.service - System Logger Daemon.