All Posts


Thank you Will, much appreciated. John
You're expecting Splunk to "pick up" the rewritten sourcetype and apply transforms defined for it, right? It doesn't work that way. An event's path through the ingestion pipeline is determined at the start by its sourcetype/source/host triplet. Anything you do to those fields during ingestion doesn't change the processing path - it can only affect search-time operations later. You can use CLONE_SOURCETYPE to make an event go through (almost) the whole ingestion pipeline again with a new sourcetype, but the caveat is that CLONE_SOURCETYPE doesn't work selectively - you can't limit its scope with a regex. So its usage is fairly complicated and I wouldn't advise it.
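For reference, this is roughly what it looks like - a minimal sketch with made-up stanza and sourcetype names; every event matching the (deliberately catch-all) regex gets cloned and re-ingested under the new sourcetype:

# transforms.conf
[clone_everything]
REGEX = .
CLONE_SOURCETYPE = my:cloned:sourcetype

# props.conf
[original:sourcetype]
TRANSFORMS-clone = clone_everything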
Hello, I am running the search index=_introspection | dedup host | table host. In the results I am not able to see one indexer and one search head, while the other indexers and search heads are visible.
If you are seeing "Invalid key in stanza" (on startup) then check for typos:

[indexer_discovery:master]
pass4SymmKey = ...

Refer to https://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf#outputs.conf.spec
You need to describe the logic from the input to the desired output.  There are at least two possible ways to match hostname1 and hostname2:

Match by position.  This is the route @ITWhisperer takes.
Match by hostname.  If the requirement is to match by name, this is one way to do it:

| foreach hostname1 hostname2
    [eval matchhost = if(isnull(matchhost) OR mvcount(<<FIELD>>) > mvcount(matchhost), <<FIELD>>, matchhost)]
| mvexpand matchhost
| foreach hostname1 hostname2
    [eval <<FIELD>> = mvindex(<<FIELD>>, mvfind(<<FIELD>>, matchhost))]
| fields - matchhost
Hi @PickleRick - Yes, I did the changes accordingly. Now I am facing the below:

1. I am able to get the expected results running without a sourcetype, but while running the search with sourcetype=nix:messages OR sourcetype=fortigate_traffic, 0 results are returned.
2. The host extraction from the source, which was there earlier, is now not working.

props.conf

### to send traffic and non-traffic events ###
[source::.../TUC-*/OOB/TUC-*(50M)*.log]
TRANSFORMS-routing = route_nix_messages, route_fortigate_traffic
TRANSFORMS-sourcetype = set_nix_sourcetype_if_not_traffic, set_fortigate_sourcetype_if_routed

### to extract host from source ###
[nix:messages]
TRANSFORMS-set_host = set_custom_host

[fortigate_traffic]
TRANSFORMS-set_host = set_custom_host

transforms.conf

### to send traffic and non-traffic events ###
[route_nix_messages]
DEST_KEY = _MetaData:Index
REGEX = .*
FORMAT = os_linux

[set_nix_sourcetype_if_not_traffic]
DEST_KEY = MetaData:Sourcetype
REGEX = .*
FORMAT = nix:messages

[route_fortigate_traffic]
DEST_KEY = _MetaData:Index
REGEX = (?i)\b(traffic|session|firewall|deny|accept)\b
FORMAT = nw_fortigate

[set_fortigate_sourcetype_if_routed]
DEST_KEY = MetaData:Sourcetype
REGEX = (?i)\b(traffic|session|firewall|deny|accept)\b
FORMAT = fortigate_traffic

### to extract host from source ###
[set_custom_host]
REGEX = /TUC-[^/]+/[^/\n]+/([^-\n]+(?:-[^-\n]+){0,5})-(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})-\d{2}-\d{2}-\d{4}\.log
FORMAT = host::$1
DEST_KEY = MetaData:Host
SOURCE_KEY = MetaData:Source
@samalchow  The problem is likely that the program (Splunk UF) doesn't have the right access. Windows keeps very strict records of security events, and the forwarder needs to be allowed to see them. Run the service as a Windows user account (either a local account or a domain account) with:

Full control over the Splunk installation directory.
Read access to any files that you want to index, e.g. Windows Application, Security, System logs, MSSQL logs and so on.

For a standard naming convention it's recommended to create a user "Splunk" with the required privileges.

Required Local/Domain Security Policy user rights assignments for the splunkd or splunkforwarder services:

Permission to log on as a service.
Permission to log on as a batch job.
Permission to replace a process-level token.
Permission to act as part of the operating system.
Permission to bypass traverse checking.

Additional activity for data required in use cases:

Enable Windows event and audit logs.
Turn on Process Tracking in your Windows Audit logs (Event ID 4688): Audit process tracking | Microsoft Learn
Windows Security Event ID 4648 tracks the explicit use of credentials, as in a "run as" event or a batch login from a scheduled task. You can enable this from your Windows Logon Event policy configuration.
| spath appliedConditionalAccessPolicies{} output=appliedConditionalAccessPolicies | mvexpand appliedConditionalAccessPolicies | spath input=appliedConditionalAccessPolicies
You've already been told - transforms are not ACLs. It's not that the first matching transform runs and execution stops. No, it's the other way around. Every configured transform is executed (as long as its regex matches; with some exceptions like CLONE_SOURCETYPE, but let's not dig into this here, I'm just listing it for completeness). So if you _first_ redirect some events to the nw_fortigate index and then have a transform redirecting all events to the os_linux index... all events will end up in os_linux.
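To make that concrete with a sketch (stanza names here are made up for illustration): within one TRANSFORMS- class the transforms run left to right, and the last one whose regex matches and writes _MetaData:Index wins. So the catch-all default has to be listed first and the more specific match last:

# transforms.conf
# catch-all default: matches every event
[set_default_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = os_linux

# specific match: only traffic events
[route_traffic]
REGEX = (?i)\btraffic\b
DEST_KEY = _MetaData:Index
FORMAT = nw_fortigate

# props.conf - default first, specific last, so the specific write wins
[your:sourcetype]
TRANSFORMS-routing = set_default_index, route_traffic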
1. We don't know what data you're running your search over.
2. Are you sure you're using dedup right?
3. If you run the search manually, what results does it return?
Extract appliedConditionalAccessPolicies as a whole, expand the multivalued field, then extract each row separately.
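Putting that together with the search above - a sketch, where the output field name policy and the final table line are just one way to lay it out:

| spath appliedConditionalAccessPolicies{} output=policy
| mvexpand policy
| spath input=policy
| table displayName, id, result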
Hmm... there used to be stencils for Visio but the page no longer works.
Hello folks, I have a series of event results which take the format as shown below:

appDisplayName: foo
appId: foo0
appliedConditionalAccessPolicies: [
  {
    displayName: All Users Require MFA All Apps
    enforcedGrantControls: [ ]
    enforcedSessionControls: [ ]
    id: foo1
    result: success
  }
  {
    displayName: macOS Conditional Access Policy
    enforcedGrantControls: [ ]
    enforcedSessionControls: [ ]
    id: foo2
    result: success
  }
  {
    displayName: Global-Restrict
    enforcedGrantControls: [ ]
    enforcedSessionControls: [ ]
    id: foo3
    result: notApplied
  }
  {
    displayName: All_user_risk_policy
    enforcedGrantControls: [ ]
    enforcedSessionControls: [ ]
    id: foo4
    result: notApplied
  }
]

Is there a way to cycle through the specific event to extract and maintain the correlation of field:value and then repeat for one or more event blocks? Effectively it would look like this:

displayName: All Users Require MFA All Apps - id: foo1 - result: success
displayName: macOS Conditional Access Policy - id: foo2 - result: success
displayName: Global-Restrict - id: foo3 - result: notApplied
displayName: All_user_risk_policy - id: foo4 - result: notApplied

Thank you to all.
If other logs from that forwarder (especially other winevent logs) are properly forwarded, it's most probably a permissions issue. The user the UF is running as must be able to read the Security log (access to this log is typically limited, whereas System or Application are much more open). The rights to eventlog channels are assigned by crafting a proper ACL in a specific registry key (sorry, don't remember which one), but if your ACLs haven't been tampered with, you can simply add the UF's user to the Event Log Readers local group.
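For example, from an elevated prompt (the account name below is a placeholder for whatever account the UF service runs as):

net localgroup "Event Log Readers" MYDOMAIN\svc-splunk-uf /add

Then restart the SplunkForwarder service so the service's logon token picks up the new group membership.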
Hi @samalchow  Are you getting logs into _internal for the other hosts? This might help determine if the issue is with the inputs or the outputs of the UFs. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
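Something like this (run on the search head; the host filter is a placeholder for one of the problem servers) will show whether the forwarder is phoning home at all:

index=_internal host=PROBLEM-SERVER sourcetype=splunkd
| stats count by host, source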
I’ve inherited a fleet of about 150 Windows Servers, all configured identically — same Deployment Server, TAs, inputs.conf/outputs.conf, etc. Out of the 150, around 10-12 systems are sending most Windows logs as expected, except for Security logs (WinEventLog:Security). I’ve already tried the basics like rebooting and reinstalling the forwarder, but no go. I’m leaning toward a possible permissions issue but not sure where to start troubleshooting from here.
How did you get your Splunk Enterprise to run on only 127.0.0.1:8000? By default Splunk should be exposed on other interfaces. If you try accessing your Splunk Enterprise instance using the IP address of your Splunk server as seen on your local network (I assume 192.168.0.112?), does it load?
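One setting worth checking is server.socket_host in web.conf - a sketch of what a loopback-only binding would look like:

[settings]
# binds Splunk Web to loopback only; remove this (or set it to
# 0.0.0.0) to listen on all interfaces
server.socket_host = 127.0.0.1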
Hi @ayomotukoya  If you're using draw.io then check out https://github.com/livehybrid/splunk_drawio_icons If not, you can crop the images you need from https://docs.splunk.com/images/8/8b/Splunk_Documentation_Icons_August2018.png  Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
This docs page contains a link to a Transparent PNG of icons you can use to draw your deployment architecture. https://docs.splunk.com/Documentation/Splunk/9.4.1/InheritedDeployment/Diagramyourdeplo... See more...
This docs page contains a link to a Transparent PNG of icons you can use to draw your deployment architecture. https://docs.splunk.com/Documentation/Splunk/9.4.1/InheritedDeployment/Diagramyourdeployment
Ok. I think everyone involved silently assumed that your IF is a HF because... well, because that's how you do it when you want to use transforms. Yes, as you noticed, when you enable local processing, a UF can do some props/transforms, but I don't remember it being properly described in the docs, so we don't know if everything works exactly as it would on a HF or not. I'm not sure if a UF is even capable of doing what you want it to do. Normally a full Splunk instance has a setting for the splunktcp input which directs data into the proper queue depending on whether the data is cooked or cooked-and-parsed. I suppose a UF is either missing this or might not even be able to send data from an external input into the typingQueue.
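For what it's worth, the setting I mean is (if I remember correctly) the route key on the [splunktcp] stanza in inputs.conf; the default looks something like the following - quoting from memory, so treat it as an assumption and verify against inputs.conf.spec for your version:

[splunktcp]
# already-parsed (cooked) data skips parsing and goes straight to
# the index queue; unparsed data enters the parsing queue
route = has_key:_linebreaker:indexQueue;absent_key:_linebreaker:parsingQueue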