All Posts

Unfortunately no! And after weeks I still don't know what the problem is!
It's almost bizarre that there is no repo available for a product that claims to be a business product.
The question is whether you blacklist them somewhere (to be honest, I don't remember how whitelist/blacklist interact, i.e. which one prevails). As for the thruput issue: it shouldn't drop events selectively. It would throttle output, which in turn would throttle input, so you would have a (possibly huge) lag ingesting events from this UF, but it shouldn't just drop events. Dropping events could occur in an extreme case if you lagged so much that Windows rotated the underlying event log so that the UF couldn't read the events from a saved checkpoint. But that's relatively unlikely, and you'd notice it because this UF would have been significantly delayed already.
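One quick way to check for that kind of ingestion lag is to compare _indextime with _time for the host in question. A minimal SPL sketch, where the index name and host are placeholders for your environment:

index=wineventlog host=<your_ca_server> earliest=-24h
| eval lag_seconds=_indextime-_time
| stats avg(lag_seconds) max(lag_seconds)

A large or steadily growing max value would point to the throttling scenario described above.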
@n_hoh Can you share your inputs.conf and event flow (e.g. UF -> HF -> Idx)?
@PrewinThomas We need to capture all event IDs associated with cert services; however, for testing purposes I was looking specifically for 4876 and 4877. And yes, the CA server is running a universal forwarder. I'm unsure how to check if Splunk is dropping high-volume events, so if you could point me in the right direction I will check on that. However, looking at the event logs on the CA server, I would not say these events are particularly high-volume: fewer than 100 in the past week across all the cert services event IDs.
@PickleRick The events are in the Security event log; everything other than the event IDs related to cert services (e.g. 4876, 4877, 4885, 4886, 4887, 4888, 4889) can be seen in Splunk. All these event IDs are whitelisted for the WinEventLog Security channel in inputs.conf.
If I understand correctly, the events you're interested in are not in the Security event log but in another one (Certification Services\Operational?). Since you've probably not created an input for that event log, you're not pulling events from it. You have to create an inputs.conf stanza for that particular event log if you want it to be pulled from the server.
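For reference, a minimal inputs.conf sketch for the UF on the CA server. The channel name below is an assumption; confirm the exact log name under Applications and Services Logs in Event Viewer before using it:

# inputs.conf - pull an additional Windows event log channel
# (channel name and index are placeholders, adjust to your environment)
[WinEventLog://Microsoft-Windows-CertificationAuthority]
disabled = 0
index = wineventlog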
@n_hoh Which event IDs are you looking for (4886, 4887, 4888, 4889, 4885)? Assuming your CA server is running a UF, could Splunk be dropping high-volume events due to bandwidth throttling? If so, try setting the throughput in limits.conf on the UF:

[thruput]
maxKBps = 0

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
Hi All, I've been tasked with setting up logging for Windows Certification Services and getting it into Splunk. I have enabled the logging for Certification Services and can see the events in the Windows Security log. In Splunk I can see the Windows Security logs for the CA server; however, the Certification Services events are missing. I've confirmed in inputs.conf that the event IDs I'm looking for are whitelisted. Does anyone have any other suggestions on what can be checked?
This breaks Splunk running in Rosetta on ARM-based M1/M2/M3/M4 Mac computers. Previously, Splunk could be run smoothly in a Rosetta-emulated Linux VM on new Macs.
@Manjunathmuni How are you producing that output for earliestTime and latestTime? Please share the query that produces that output, because those two times do not show the 15-minute preset range. Please also open the job inspector from a search you have run with those SPL values, then open the job properties at the bottom of that page, look for earliestTime and latestTime, and post those. They will be of the format 2025-07-28T00:31:00.000+01:00, not the same as your output.
I'm working on a transforms.conf to extract fields from a custom log format. Here's my stanza:

REGEX = ^\w+\s+\d+\s+\d+:\d+:\d+\s+\d{1,3}(?:\.\d{1,3}){3}\s+\d+\s+\S+\s+(\S+)(?:\s+(iLO\d+))?\s+-\s+-\s+-\s+(.*)
FORMAT = srv::$1 ver::$2 msg::$3
DEST_KEY = _meta

This regex is supposed to extract the following from a log like:

Jul 27 14:10:05 x.y.z.k 1 2025-07-27T14:09:05Z QQQ123-G12-W4-AB iLO6 - - - iLO time update failed. Unable to contact NTP server.

Expected extracted fields:

srv = QQQ123-G12-W4-AB
ver = iLO6
msg = iLO time update failed. Unable to contact NTP server.

The regex works correctly when tested independently, and all three groups are matched. However, in Splunk, only the first two fields (srv and ver) are extracted correctly. The msg field only includes the first word: iLO. It seems Splunk is stopping at the first space for the msg field, despite the regex using (.*) at the end. Any idea what could be causing this behavior? Is there a setting or context where Splunk treats fields as single-token values by default? Any advice would be appreciated!
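One hedged guess, not confirmed in this thread: _meta stores indexed fields as a space-delimited list, so a value containing spaces can get tokenized at whitespace unless it is quoted. Assuming that is what's happening here, wrapping the last capture in double quotes may keep the whole message together as one value:

FORMAT = srv::$1 ver::$2 msg::"$3"

Treat this as a sketch to test, not a definitive fix.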
It's not that you can't do this or that. It's just that using a search filter is not a sure method of limiting access. No one forbids you from doing it, though; just be aware that users can bypass your "restrictions". Also, you technically can edit the built-in stash sourcetype; it's just very, very, very not recommended to do so.

As I said before, you can index the summary back into the original index, but it might not be the best idea due to, as I assume, the greatly different amount of summary data vs. original data. So the best practice is to have a separate summary index for each group to which you have to grant access rights separately. There are other options which are... technically possible, but no one will advise them because they have their downsides and might not work properly (at least not in all cases). Asking again and again doesn't change the fact that the proper way to go is to have separate indexes. If for some reason you can't do that, you're left with the already described alternatives, each of which has its cons.
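To make the per-group pattern concrete, a minimal sketch (the index and field names are hypothetical): each application's scheduled summary search writes to its own summary index via collect, and each role is then allowed to search only its own summary index (srchIndexesAllowed in authorize.conf), so the index boundary does the access control instead of a search filter.

| stats count by service
| collect index=summary_appA

A matching summary_appB index would hold the other group's summaries.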
@PickleRick Sir, what can I do now? I am breaking my head. Is there no option left other than creating a separate summary index per app? If yes, can I ingest the respective summary index into the same app index (appA's index is opco_appA, with the summary index also opco_appA)?
Wait a second. You're doing summary indexing. That means you're saving your summary data with the stash sourcetype. It has nothing to do with the original sourcetype: even if your original sourcetype had service as an indexed field, in the summary events it will be a normal search-time extracted field. And generally you shouldn't fiddle with the default internal Splunk sourcetypes.
@PickleRick Then if I make service an indexed field, will it solve my problem, or is there any chance that this can be bypassed at some point?
In your case the user can define his own field which will always have a value matching that of the search filter. The simplest way to do so would be to create a calculated field: service="your service". And if your search filter relies on the service="your service" condition, well, that condition will be met for all events, effectively rendering this part of the filter useless.
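To illustrate the bypass, a minimal sketch of what such a user could put in props.conf in their own app ("stash" being the summary-index sourcetype, and "your service" standing in for whatever value the filter requires):

# props.conf - calculated field that shadows the real service field
[stash]
EVAL-service = "your service"

After this, every summary event satisfies service="your service" at search time, regardless of which service actually produced it.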
The issue is definitely that I have to add some indexers and maybe also 1 or 2 SHs to the cluster. The infrastructure is currently undersized; it can't manage all the current data and jobs. Due to a very high data burst during office time (9 to 17), delays (for very, very massive log files), and CPU saturation on the indexer side, the infrastructure can't manage all the data, user interaction, and scheduled jobs at once. So the indexers stop responding at times. Pipelines is set to 1; if I raise it to 2, the system collapses.

The Monitoring Console flagged some heavy queries during that time range that also write directly to some indexes. But I have my own monitoring dashboard on the SHs that shows a strong delay for heavy logs (from 15 to 90 minutes before they get back to 0 minutes of delay and the indexes can drain their queues), plus some blocked queues (I have a 1000MB size set for many queues), and all of that easily points to a collapsing infrastructure 🤷

The infrastructure grew over the last months, so it's time to add some servers. I began with 2 indexers, then 4; now I really have to go to 8-12. Splunk best practices also suggest a 12-indexer infrastructure for my actual data flow (2-3 TB per day). Meanwhile, I fixed the current situation by disabling heavy logs and heavy jobs on the SHs 🤷 I also lowered the thruput for UFs, from maximum to 10MB/s. The system works, but with some features and data disabled. Thanks all.
cat props.conf

[opco_sony]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
SEDCMD-newline_remove = s/\\r\\n/\n/g
LINE_BREAKER = ([\r\n]+)[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s
SHOULD_LINEMERGE = False
TRUNCATE = 10000

# Leaving PUNCT enabled can impact indexing performance. Customers can
# comment this line if they need to use PUNCT (e.g. security use cases)
ANNOTATE_PUNCT = false

TRANSFORMS-0_fix_hostname = syslog-host
TRANSFORMS-1_extract_fqdn = f5_waf-extract_service
TRANSFORMS-2_fix_index = f5_waf-route_to_index

cat transforms.conf

# FIELD EXTRACTION USING A REGEX
[f5_waf-extract_service]
SOURCE_KEY = _raw
REGEX = Host:\s(.+)\n
FORMAT = service::$1
WRITE_META = true

# Routes the data to a different index -- this must be listed in a TRANSFORMS-<name> entry.
[f5_waf-route_to_index]
INGEST_EVAL = indexname=json_extract(lookup("service_indexname_mapping.csv", json_object("service", service), json_array("indexname")), "indexname"), index=if(isnotnull(indexname), if(isnotnull(index) and match(index, "_cont$"), index, indexname), index), service:=null(), indexname:=null()

cat service_indexname_mapping.csv

service,indexname
juniper-prod,opco_juniper_prod
juniper-non-prod,opco_juniper_non_prod

This is the backend configuration that routes logs from the global index to separate indexes based on the service name. How can I make this service field an indexed field?
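A hedged observation, assuming the goal is to keep service searchable as an indexed field: the f5_waf-extract_service transform already writes service into _meta as an indexed field (WRITE_META = true); it is the service:=null() assignment in the INGEST_EVAL that strips it again after routing. Dropping that assignment and declaring the field for the search head should be enough, roughly:

# transforms.conf - same routing, but without nulling service (sketch)
[f5_waf-route_to_index]
INGEST_EVAL = indexname=json_extract(lookup("service_indexname_mapping.csv", json_object("service", service), json_array("indexname")), "indexname"), index=if(isnotnull(indexname), if(isnotnull(index) and match(index, "_cont$"), index, indexname), index), indexname:=null()

# fields.conf - tell the search head that service is an index-time field
[service]
INDEXED = true

This is a sketch against the configuration shown above, not a tested change.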
@PickleRick OK, and how does this apply in my case? If I restrict them based on service for the summary index, even if a user runs | stats count by service, he cannot see other services' data, right? What else can he do here?