All Posts


Don't you have a search-time field extraction defined that overrides the original one? What does the search log say (especially the LISPY part) when you search for a specific sourcetype?
Hi @livehybrid  I'll assess again with the SQS-S3 connector. I'll need to ingest both historic data and the ongoing data stream. Based on my initial observations, I think I'll need to use multiple SQS-S3 connectors, or use Lambda to funnel everything into a single SQS-S3 connector. Please let me know if there's any alternative to this assumption. Thanks!
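For reference, a rough sketch of what two SQS-based S3 inputs could look like in the Splunk Add-on for AWS: one queue fed by backfilled notifications for the historic objects, one fed by live bucket notifications. The account, queue URLs, and decoder are placeholders, and the parameter names should be verified against your add-on version:

[aws_sqs_based_s3://historic_backfill]
aws_account = my_aws_account
sqs_queue_region = us-east-1
sqs_queue_url = https://sqs.us-east-1.amazonaws.com/123456789012/historic-queue
s3_file_decoder = CustomLogs
sourcetype = aws:s3
index = aws
interval = 300

[aws_sqs_based_s3://ongoing_stream]
aws_account = my_aws_account
sqs_queue_region = us-east-1
sqs_queue_url = https://sqs.us-east-1.amazonaws.com/123456789012/live-queue
s3_file_decoder = CustomLogs
sourcetype = aws:s3
index = aws
interval = 60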
Hi guys, I'm trying to run a playbook and send an email using the SMTP service, but I'm not able to. When I tested sending email from the SOAR CLI it worked, but from the console it doesn't. Can anyone tell me how to send emails from SOAR using the "passwordless" method? I'm unable to find instructions or an SOP from Splunk. I've tested connectivity over port 25 towards the SMTP server, and it's working.
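In case it helps anyone debugging the same thing: "passwordless" here usually just means an unauthenticated relay on port 25, which in the SOAR SMTP asset typically amounts to leaving the username and password fields blank. A minimal Python sketch of the equivalent unauthenticated send, handy for testing from the SOAR host (host and addresses are placeholders):

import smtplib
from email.message import EmailMessage

# Build a simple test message; all addresses are placeholders.
msg = EmailMessage()
msg["Subject"] = "SOAR SMTP relay test"
msg["From"] = "soar@example.com"
msg["To"] = "ops@example.com"
msg.set_content("Test message sent without SMTP authentication.")

# Connect on port 25 and send without calling login() -- the
# "passwordless" path; the relay must permit unauthenticated mail.
with smtplib.SMTP("smtp.example.com", 25, timeout=10) as smtp:
    smtp.send_message(msg)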
Hi all, I want to extract fields from a custom log format. Here's my transforms.conf:

REGEX = ^\w+\s+\d+\s+\d+:\d+:\d+\s+\d{1,3}(?:\.\d{1,3}){3}\s+\d+\s+\S+\s+(\S+)(?:\s+(iLO\d+))?\s+-\s+-\s+-\s+(.*)
FORMAT = name::$1 version::$2 message::$3
DEST_KEY = _meta

This regex is supposed to extract the following from a log like:

Jul 27 14:10:05 1.2.3.4 1 2025-07-27T14:09:05Z QQQ123-G12-W4-AB iLO6 - - - iLO time update failed. Unable to contact NTP server.

Expected extracted fields:

name = QQQ123-G12-W4-AB
version = iLO6
message = iLO time update failed. Unable to contact NTP server.

The regex works correctly when tested independently, and all three groups are matched. However, in Splunk, only the first two fields (name and version) are extracted correctly. The message field only includes the first word: iLO. It seems Splunk is stopping at the first space for the message field, despite the regex using (.*) at the end. Any idea what could be causing this behavior? Is there a setting or context where Splunk treats fields as single-token values by default? Any advice would be appreciated!
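A likely cause, for what it's worth: with DEST_KEY = _meta, the extracted fields are written into _meta, which is a space-delimited list of key::value pairs, so an unquoted value ends at the first whitespace. Quoting the last capture usually preserves the whole string; a sketch, keeping the rest of the stanza unchanged:

FORMAT = name::$1 version::$2 message::"$3"

Remember too that index-time transforms only apply on the parsing tier (indexer or heavy forwarder) and only to newly indexed data, not retroactively.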
Unfortunately, no! And after weeks I still don't know what the problem is!
It's almost bizarre that there is no repo available for a product that claims to be a business product.
The question is whether you're blacklisting them (to be honest, I don't remember how whitelist and blacklist interact, i.e. which one prevails). As for the thruput issue: it shouldn't drop events selectively. It would throttle output, which in turn would throttle input, so you would have a (possibly huge) lag ingesting events from this UF, but it shouldn't just drop events. Dropping could occur in an extreme case: if you lagged so much that Windows rotated the underlying event log, the UF couldn't read the events from its saved checkpoint. But that's relatively unlikely, and you'd notice it because this UF would have been significantly delayed already.
@n_hoh  Can you share your inputs.conf and event flow (like UF -> HF -> Idx)?
@PrewinThomas I need to capture all event IDs associated with cert services; for testing purposes, though, I was looking specifically for 4876 and 4877. And yes, the CA server is running a universal forwarder. I'm unsure how to check whether Splunk is dropping high-volume events, so if you can point me in the right direction I'll check on that. However, looking at the event logs on the CA server, I wouldn't say these events are particularly high-volume: fewer than 100 in the past week across all cert services events.
@PickleRick the events are in the Security event log, which can be seen in Splunk, except for the event IDs related to cert services, e.g. 4876, 4877, 4885, 4886, 4887, 4888, 4889. All these event IDs are whitelisted for the WinEventLog Security channel in inputs.conf.
If I understand correctly, the events you're interested in are not in the Security event log but in another one (Certification Services\Operational?). Since you've probably not created an input for that event log, you're not pulling events from it. You have to create an inputs.conf stanza for that particular event log if you want it to be pulled from the server.
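If it does turn out to be a separate channel, a minimal stanza might look like the sketch below. The channel name here is a placeholder; confirm the exact name in Event Viewer or with wevtutil el before using it:

[WinEventLog://Microsoft-Windows-CertificationAuthority/Operational]
disabled = 0
index = wineventlog
renderXml = false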
@n_hoh  Which event IDs are you looking for (4886, 4887, 4888, 4889, 4885)? Assuming your CA server is running a UF: could Splunk be dropping high-volume events due to bandwidth throttling? If so, try raising the throughput limit in limits.conf:

[thruput]
maxKBps = 0

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
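One way to check whether the UF is actually hitting a thruput ceiling is to chart its metrics.log data from _internal. A sketch, with the host name as a placeholder:

index=_internal host=<ca_server> source=*metrics.log* group=thruput name=thruput
| timechart avg(instantaneous_kbps) AS avg_kbps max(instantaneous_kbps) AS max_kbps

A line that sits flat at the configured maxKBps suggests throttling; gaps in _internal data from that host would point elsewhere.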
Hi all, I've been tasked with setting up logging for Windows Certification Services and getting it into Splunk. I've enabled logging for Certification Services and can see its events in the Windows Security log. In Splunk I can see the Windows Security logs for the CA server, but the Certification Services events are missing. I've confirmed in inputs.conf that the event IDs I'm looking for are whitelisted. Does anyone have any other suggestions on what can be checked?
This breaks Splunk running under Rosetta on ARM-based M1/M2/M3/M4 Macs. Previously, Splunk ran smoothly in a Rosetta-emulated Linux VM on new Macs.
@Manjunathmuni How are you producing that output for earliestTime and latestTime? Please share the query that produces it, because those two times do not match the 15-minute preset range. Please also open the job inspector from a search you have run with those SPL values, open the job properties at the bottom of that page, look for earliestTime and latestTime, and post those. They will be in the format 2025-07-28T00:31:00.000+01:00, not the same as your output.
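For reference, one way to see the time boundaries a search actually ran with is addinfo, which exposes them as epoch fields (run this with the same time picker setting):

| makeresults
| addinfo
| eval earliest_readable=strftime(info_min_time, "%Y-%m-%dT%H:%M:%S%z"), latest_readable=strftime(info_max_time, "%Y-%m-%dT%H:%M:%S%z")
| table info_min_time info_max_time earliest_readable latest_readable

Note that info_max_time can come back as +Infinity for all-time searches, in which case strftime returns nothing.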
I'm working on a transforms.conf to extract fields from a custom log format. Here's my regex:

REGEX = ^\w+\s+\d+\s+\d+:\d+:\d+\s+\d{1,3}(?:\.\d{1,3}){3}\s+\d+\s+\S+\s+(\S+)(?:\s+(iLO\d+))?\s+-\s+-\s+-\s+(.*)
FORMAT = srv::$1 ver::$2 msg::$3
DEST_KEY = _meta

This regex is supposed to extract the following from a log like:

Jul 27 14:10:05 x.y.z.k 1 2025-07-27T14:09:05Z QQQ123-G12-W4-AB iLO6 - - - iLO time update failed. Unable to contact NTP server.

Expected extracted fields:

srv = QQQ123-G12-W4-AB
ver = iLO6
msg = iLO time update failed. Unable to contact NTP server.

The regex works correctly when tested independently, and all three groups are matched. However, in Splunk, only the first two fields (srv and ver) are extracted correctly. The msg field only includes the first word: iLO. It seems Splunk is stopping at the first space for the msg field, despite the regex using (.*) at the end. Any idea what could be causing this behavior? Is there a setting or context where Splunk treats fields as single-token values by default? Any advice would be appreciated!
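Same symptom as the iLO post above: unquoted values in _meta end at the first space, so quoting the capture (msg::"$3") is the usual fix. It's also worth confirming the transform is wired up as an index-time TRANSFORMS (not a search-time REPORT) in props.conf on the parsing tier. A sketch, with the sourcetype and transform names as placeholders:

props.conf:
[my:custom:sourcetype]
TRANSFORMS-ilo = ilo_meta_extract

transforms.conf:
[ilo_meta_extract]
REGEX = ^\w+\s+\d+\s+\d+:\d+:\d+\s+\d{1,3}(?:\.\d{1,3}){3}\s+\d+\s+\S+\s+(\S+)(?:\s+(iLO\d+))?\s+-\s+-\s+-\s+(.*)
FORMAT = srv::$1 ver::$2 msg::"$3"
DEST_KEY = _meta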
It's not that you can't do this or that. It's just that using a search filter is not a sure method of limiting access. No one forbids you from doing it, though; just be aware that users can bypass your "restrictions". Also, you technically can edit the built-in stash sourcetype; it's just very, very strongly not recommended.

As I said before, you can index the summary back into the original index, but it might not be the best idea due to, as I assume, the greatly different amount of summary data vs. original data. So the best practice is to have a separate summary index for each group to which you have to grant access rights separately. There are other options which are... technically possible, but no one will advise them because they have their downsides and might not work properly (at least not in all cases).

Asking again and again doesn't change the fact that the proper way to go is to have separate indexes. If for some reason you can't do that, you're left with the already described alternatives, each of which has its cons.
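For completeness, the separate-index route is mechanically simple. A sketch with placeholder names: one small index per group in indexes.conf on the indexers, and the scheduled search writing to it with collect.

indexes.conf:
[summary_opco_appa]
homePath = $SPLUNK_DB/summary_opco_appa/db
coldPath = $SPLUNK_DB/summary_opco_appa/colddb
thawedPath = $SPLUNK_DB/summary_opco_appa/thaweddb

Scheduled search for that group:
index=opco_appa | stats count BY service | collect index=summary_opco_appa

Access is then granted per role by adding only the matching summary index to that role's allowed indexes.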
@PickleRick sir, what can I do now? I'm racking my brain. Is there no option left other than creating a separate summary index per app? If yes, can I ingest the respective summary data into the same app index (appA index = opco_appA, summary index also opco_appA)?
Wait a second. You're doing summary indexing. That means you're saving your summary data with the stash sourcetype. It has nothing to do with the original sourcetype; even if your original sourcetype had service as an indexed field, in the summary events it will be a normal search-time extracted field. And generally you shouldn't fiddle with the default internal Splunk sourcetypes.
@PickleRick then if I make service an indexed field, will it solve my problem, or is there any chance that this can be violated at some point?