All Posts


You have a few options, each with its own pros and cons, and without knowing the data I can only make an educated guess at what would work best for you.

Data model acceleration - you could put your data into data models, either existing ones or custom ones that fit your data, and accelerate them. This "accelerates" your data, which in theory should significantly boost search speed. Mileage may vary, but you often get orders of magnitude faster searching. The con is that you will probably roughly double the size of your indexed data, because acceleration keeps your non-accelerated logs and adds a set of accelerated summaries on the index, so you will use more storage space. Additionally, every 5 minutes or so the acceleration search runs to keep the summaries up to date, and that permanently occupies RAM and CPU on the box. Plus, depending on your comfort with building a data model or fitting your data into an existing one, it is a little labor intensive to set up the first time. As for RBAC, Splunk applies the same RBAC rules to your accelerated data as exist on the underlying index, so you won't need any special RBAC considerations.

Summary indexing - this is an amazing tool for doing exactly that, summarizing the data. For example, with network logs you have probably seen that when two machines talk to each other in a given time period, you can end up with hundreds of "connection logs". If your use case is not interested in each of those logs, but more in whether these two IPs talked at all (think threat intelligence - did we go to bad site x), then you could create a single summary log that says IP address x talked to IP address y 100 times, and write it to a summary index. Summary data gets its speed advantage because instead of speeding up the way you look for a needle in the haystack, you shrink the haystack - in my example it is 1/100th the size of the original index. This is a useful solution if a summary of the logs is good enough for what your analysts are looking for, which may or may not be the case. In the world of threat intel, we often have to look back at network traffic over 18 months. We look at the summary data; if we have a hit, the summary tells us what date the hit was on, but the analysts may have to go look at the unsummarized logs for that day to get a better idea of what really happened, because summary logs gain their power by being exactly that - a summary.

For RBAC purposes, you can simply write your summary data to the same index it was created from. The term "summary index" implies that you need a special index, but that is not really the case: summary data can be written to any index; it is just a new source, and the sourcetype is stash. So if you summarize your data into the same index the original logs came from, it will have the same RBAC rules on it.

Here is a video on how to summarize data: https://youtu.be/mNAAZ3XGSng

Below is a simple SPL concept for summarizing Palo Alto firewall logs:

index=pan sourcetype=connections
| stats sum(bytes_in) as bytes_in sum(bytes_out) as bytes_out earliest(_time) as _time count by src_ip, dest_ip
| collect index=pan source="summarized_pan_connections"

You then need to decide how often to summarize your logs and set up a saved search to run that query on that schedule.
Once it runs, you just query the data with:

index=pan source="summarized_pan_connections"

Another option is to use scheduled searches for your dashboard panels. Each panel runs its query once at a specified time, and everyone who visits the dashboard sees the data that was produced by that scheduled run. This is relatively simple to set up and keeps your RBAC rules, but if having the very latest logs on the dashboard panels is your biggest priority, this option starts to fall apart.

I have given three suggestions. In my environment I have a similar situation to yours: a large amount of data, and looking back over long periods of time is slow. We actually run a mixture of all of it. We accelerate a day's worth of data, then in the middle of the night we summarize yesterday's logs. When users search the dashboard, the query is a combination of the accelerated data for today and the summarized data for the previous days.

Hope this gives you some ideas for a path forward. There are plenty of things to consider, particularly how "fresh" the data needs to be. Is a summary of the logs good enough? Can you live with static data in your dashboards that refreshes every day or every hour?
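To make the acceleration option above a bit more concrete, here is a minimal sketch of what querying an accelerated data model with tstats can look like. The data model name (Network_Traffic) and the field names are only illustrative; substitute the data model that actually fits your data:

| tstats summariesonly=true count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out
    from datamodel=Network_Traffic
    where All_Traffic.dest_ip="192.0.2.*"
    by All_Traffic.src_ip All_Traffic.dest_ip

With summariesonly=true the search only touches the accelerated summaries, which is where the speed-up comes from; set it to false if you also want events that have not been accelerated yet.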
@captaincool07, it would still be the same situation with data model summaries. Considering your RBAC situation, I would not use data model summaries if you want to restrict access per index. You'll add a lot of workload on your systems to accelerate the summaries and keep them intact. If you just want to use tstats for faster queries, it would be better to use index-time field extractions for new ingestion. Thanks, Tejas.
Hi @Kosyay

Yes, you can use props/transforms to extract fields from the event. I wouldn't recommend modifying the event itself at index time though, as this might break other extractions that exist for this feed; instead you can create new fields for the data you want to extract.

I haven't validated this on my instance as I don't have WinEventLogs at the moment, but you could try something like this: extract the two "Account Name" values into new fields at index time using props.conf and transforms.conf with an appropriate regex. In props.conf, configure a stanza for your sourcetype and reference a transform. In transforms.conf, use the regex to capture the two values and write them out under new field names. Example configuration:

=== props.conf ===
[your_sourcetype]
TRANSFORMS-rename_account_names = extract_win_account_names

=== transforms.conf ===
[extract_win_account_names]
# (?s) lets . match across the newlines of the multi-line event
REGEX = (?s)Subject:\s+Security ID:.*?Account Name:\s*([^\s]+).*?New Logon:.*?Account Name:\s*([^\s]+)
FORMAT = Source_Account_Name::$1 Destination_Account_Name::$2
# WRITE_META is required for index-time field creation
WRITE_META = true

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
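Following up on the configuration above: if you prefer to keep this at search time (in line with the recommendation not to modify events at index time), a hedged sketch of the equivalent search-time extraction, using only props.conf with named capture groups; the stanza and field names are the same illustrative ones as above:

=== props.conf ===
[your_sourcetype]
EXTRACT-win_account_names = (?s)Subject:\s+Security ID:.*?Account Name:\s*(?<Source_Account_Name>[^\s]+).*?New Logon:.*?Account Name:\s*(?<Destination_Account_Name>[^\s]+)

This keeps the raw event untouched and avoids re-indexing; the trade-off is that the fields are computed at search time rather than being available to tstats as indexed fields.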
Hello everyone,

I have a network monitoring system that exports data via IPFIX using Forwarding Targets. I am trying to receive this data in Splunk using the Splunk Stream app. The add-on is installed and Stream is enabled, but I am facing the following issues:

- Templates are not being received properly.
- The data arrives, but it's unreadable or incomplete.
- I need full flow data, including summaries or headers from Layer 7 (e.g., HTTP, DNS).

My question: has anyone successfully received and parsed IPFIX data in Splunk? If so, could you share the steps or configurations you used (like streamfwd.conf, input settings, etc.)? Any guidance would be greatly appreciated! Thanks in advance!
Hello! I have logs from an Active Directory Domain Controller in Splunk and am trying to configure monitoring of user logons (EventCode=4624). Unfortunately, there are two fields with the name "Account Name". Example of a log:

06/25/2025 02:54:32 PM
LogName=Security
EventCode=4624
EventType=0
ComputerName=num-dc1.boston.loc
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=881265691
Keywords=Audit Success
TaskCategory=Logon
OpCode=Info
Message=An account was successfully logged on.
Subject:
  Security ID: NULL SID
  Account Name: -
  Account Domain: -
  Logon ID: 0x0
Logon Type: 3
Impersonation Level: Impersonation
New Logon:
  Security ID: BOSTON\***
  Account Name: ***
  Account Domain: BOSTON
  Logon ID: 0x135F601B51
  Logon GUID: {12C0DD76-F92B-07E1-88A5-914C43F7D3D5}

Could you please advise if it's possible to modify the fields before indexing, i.e., at the "input" stage? Specifically, I'd like to change the first field (Subject: Account Name) to Source Account Name and the second field (New Logon: Account Name) to Destination Account Name. From what I understand, this would require modifications in props.conf and transforms.conf. If anyone has ideas on how to achieve this, please share!
Hi there,

In Mission Control in our properly working Splunk environment, we see the following. This is exactly how we want it: the finding-based correlation search "Threat - Findings Risk Threshold Exceeded for Entity Over 24 Hour Period - Rule" fired because of multiple findings that occurred for one specific entity. If you expand it, it shows all the findings. (Please ignore the weird names of the findings.)

Then in our other environment, it looks different. When you click expand, it has to think for a while, and then it just shows the number of intermediate findings, but not the actual findings themselves. You also can't click on this grey label.

I suspect it has something to do with the fact that our working environment is a somewhat fresh install, whereas the environment in which it doesn't work properly is an upgrade from an old ES version to the newest version. There might be some index problems or something, I don't know. Does anyone know?
@ITWhisperer what if I use data models instead of a summary index? Will it serve the same purpose? And how about RBAC control for data models?
Hi, I have a requirement for "high JVM thread wait time" monitoring for BTs. The only JVM metric available is the thread count (Application Infrastructure Performance -> Tier -> JVM -> Threads), so I would appreciate your expert suggestions on enabling/configuring the metric.
Another thing to consider with summary indexes is idempotent updates, that is, how to avoid double counting. For example, in your instance, if you created summaries for each day in your summary index, what do you do with events that arrive late (for whatever reason)? How do you make sure they are included in the summary without double-counting the events which have already been summarised? I did a presentation on this a couple of years ago for the B-Sides program: https://github.com/bsidessplunk/2022/blob/main/Summary%20Index%20Idempotency/Bsides%20Spl22%20Summary%20Index%20Idempotency.pptx
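One possible pattern for this (a rough sketch only, not necessarily the approach in the linked presentation; the summary_day and run_time field names are purely illustrative) is to stamp each summary row with the day it covers and the time the summarising search ran, re-run the summary for a day whenever late events show up, and keep only the latest run per day at query time.

Summarising search, run daily or re-run for a day that received late events:

index=pan sourcetype=connections earliest=-1d@d latest=@d
| stats sum(bytes_in) as bytes_in sum(bytes_out) as bytes_out count by src_ip dest_ip
| eval summary_day=strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d"), run_time=now()
| collect index=pan source="summarized_pan_connections"

Query side, keeping only the most recent run for each summarised day so re-runs never double count:

index=pan source="summarized_pan_connections"
| eventstats max(run_time) as latest_run by summary_day
| where run_time=latest_run
| stats sum(count) as connections sum(bytes_in) as bytes_in by src_ip dest_ip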
Can I use data models here? Will they serve the same purpose as a summary index? Which option is reliable, performs best, and, most importantly, aligns with the RBAC we have created?
Also, you can use concepts like base searches and chained (post-process) searches to reduce the number of searches the dashboard runs, so that fewer searches consume resources (see the sketch below). And try to optimize the searches themselves to get faster results.
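As a rough illustration of that pattern (the index, sourcetype, and field names here are made up for the example), one base search feeds several panels via post-process searches:

<form version="1.1">
  <label>Base search example</label>
  <search id="base">
    <query>index=app_a sourcetype=app_logs | stats count by status host</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <search base="base">
          <query>| stats sum(count) as events by status</query>
        </search>
      </chart>
    </panel>
    <panel>
      <table>
        <search base="base">
          <query>| stats sum(count) as events by host | sort - events</query>
        </search>
      </table>
    </panel>
  </row>
</form>

The base search runs once and both panels only post-process its results.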
Hi @captaincool07

Summary indexing does not natively preserve RBAC at the original index level. If you aggregate data from multiple indexes into a single summary index, users with access to the summary index can see all summarised data, regardless of their original index permissions. This can break your RBAC model if not carefully managed. Index permissions and app permissions are managed separately, so you do have some granular control over who can access the summary.

There are different ways to summarise data with Splunk - check out https://help.splunk.com/en/splunk-enterprise/manage-knowledge-objects/knowledge-management-manual/9.4/use-data-summaries-to-accelerate-searches/use-summary-indexing-for-increased-search-efficiency for more info on how to achieve this.

It does sound like this would be a good candidate for summary indexing, assuming you have already looked at improving the performance of the search itself (e.g. can you use tstats to search the data, TERM(someString), etc. - see the brief examples below).

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
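For reference, minimal sketches of those two techniques (the index, sourcetype, and search terms are illustrative only):

Using TERM() so Splunk matches an exact indexed term from the lexicon:

index=app_a sourcetype=app_logs TERM(ERROR)
| stats count by host

Using tstats against indexed fields, which avoids reading raw events entirely:

| tstats count where index=app_a sourcetype=app_logs by _time span=1h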
You can have multiple "summary" indexes - perhaps one for each "primary" index and then apply RBAC to those summary indexes as well.
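A hedged sketch of how that could look in authorize.conf (the role and index names are made up; adjust them to your naming scheme), so that each app's role can only search its own primary and summary indexes:

# authorize.conf - illustrative role and index names
[role_app_a_users]
srchIndexesAllowed = app_a;summary_app_a
srchIndexesDefault = app_a

[role_app_b_users]
srchIndexesAllowed = app_b;summary_app_b
srchIndexesDefault = app_b

The summarising saved searches would then write each app's summary into its matching summary index via collect.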
Yes, exactly
Summary index or any alternative

Hi, I have created a dashboard with 8 panels and the time frame is the last 5 minutes. I kept that shorter time frame because we receive large chunks of data for this platform. The app team wants this dashboard to run over longer time frames, maybe the last 7 days. If we run it for the last 7 days, the search takes a very long time and a lot of resources get wasted. They asked for a solution that supports a longer time frame with faster results.

I explored and found the summary index as an option but have never worked with it. Can this help me? We have nearly 100+ indexes on that particular platform and the sourcetype is the same for all of them. We have RBAC implemented for each index (restricting app A users from viewing app B logs and vice versa).

Now, if I implement a summary index here, will that RBAC still take effect? Because the summary index would contain data for all indexes, and if it is used in the dashboard, could an app A user see app B logs by any chance, or does the existing RBAC still apply over the summary index? Or else please suggest other alternatives as well. In the end it should align with the RBAC I have created.
What would be the expected result from your sample data? 8 events and 52044 total bags, or something else?

| bin span=1d TIME
| stats count latest("TOTAL DAILY BAGS") as TOTAL_DAILY_BAGS by TIME
| stats sum(count) as total_events sum(TOTAL_DAILY_BAGS) as total_daily_bags

If your TIME field is not already a whole date (as shown in your sample), you may need the bin to bucket it by day first.
Hi @Simona11

You could try:

| timechart span=1d latest("TOTAL DAILY BAGS") as daily_bags, count as total_alarms
| stats sum(total_alarms) as total_alarms, sum(daily_bags) as total_bags

Here is a runnable example with mocked-up data matching your sample:

| makeresults count=8
| streamstats count as row
| eval AREA=case(row=1,"1111", row=2,"1111", row=3,"1222", row=4,"1323", row=5,"1323", row=6,"1222", row=7,"1111", row=8,"1323")
| eval "ALARM DESCRIPTION"=case(row=1,"TRIGGER", row=2,"TRIGGER", row=3,"FAILURE", row=4,"FAILURE", row=5,"HAC", row=6,"FAILURE", row=7,"FAILURE", row=8,"TRIGGER")
| eval "TOTAL DAILY BAGS"=case(row<=5,18600, row>5,33444)
| eval TIME=case(row<=5,"2024-03-01", row>5,"2024-02-01")
| eval _time=strptime(TIME,"%Y-%m-%d")
| timechart span=1d latest("TOTAL DAILY BAGS") as daily_bags, count as total_alarms
| stats sum(total_alarms) as total_alarms, sum(daily_bags) as total_bags

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I have a lookup table with daily records which includes: area, alarm description, date, and the number of bags per area for that specific day (a repeated number). There is a timestamp for each alarm, and a bag column repeating the total bags for that day (the same number appears multiple times because the same day has multiple alarm rows). I want to:

1) compute the total number of bags for the whole 3-month period.
2) compute the total number of alarm events (counted as total occurrences across the 3 months).

What is the best approach in Splunk Enterprise to get both in the same final stats result?

Example of the scenario:

AREA   ALARM DESCRIPTION   TOTAL DAILY BAGS   TIME
1111   TRIGGER             18600              01/03/2024
1111   TRIGGER             18600              01/03/2024
1222   FAILURE             18600              01/03/2024
1323   FAILURE             18600              01/03/2024
1323   HAC                 18600              01/03/2024
1222   FAILURE             33444              01/02/2024
1111   FAILURE             33444              01/02/2024
1323   TRIGGER             33444              01/02/2024
It is not so much the copy/paste; it is the value used for All - this needs to be "All", not "*" (similarly with the initialValue).

<form version="1.1" theme="light">
  <label>All handling</label>
  <search id="base_search">
    <query>| makeresults format=csv data="categories
A
B
C"
| table categories</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
    <sampleRatio>1</sampleRatio>
  </search>
  <fieldset submitButton="false">
    <input type="multiselect" token="categories">
      <label>Categories</label>
      <choice value="All">All</choice>
      <default>All</default>
      <initialValue>All</initialValue>
      <fieldForLabel>categories</fieldForLabel>
      <fieldForValue>categories</fieldForValue>
      <search base="base_search">
        <query>| stats count by categories</query>
      </search>
      <valuePrefix>testCategories="</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> AND </delimiter>
      <change>
        <eval token="form.categories">case(mvcount('form.categories')=0,"All",mvcount('form.categories')&gt;1 AND mvfind('form.categories',"All")&gt;0,"All",mvcount('form.categories')&gt;1 AND mvfind('form.categories',"All")=0,mvfilter('form.categories'!="All"),1==1,'form.categories')</eval>
        <eval token="categories_choice">if('form.categories'=="All","categories=\"*\"",'categories')</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        Categories: $categories_choice$
      </html>
    </panel>
  </row>
</form>
Hi Team, I am new to this community. I am working on Golang integration with AppDynamics. The Go SDK is not available in the AppDynamics downloads. Can anybody help me with how to get it? And if anyone can share the documentation for integrating AppDynamics with Golang, that would be really helpful. Thanks in advance. #AppDynamics #AppD #Golang #integration