All Posts

I would like to understand whether the following scenario is possible:

1. Security detection queries/analytics relying on Sysmon logs are onboarded and enabled.
2. When the logs of a certain endpoint match a security analytic, an alert is created and sent to a case management system for the analyst to investigate.
3. At this point, the analyst is not able to view the Sysmon logs of that particular endpoint. He will need to manually trigger the Sysmon logs to be indexed from the case management platform; only then will he be able to search the Sysmon logs in Splunk for the past X number of days.
4. However, the analyst will not be able to search the Sysmon logs of other, unrelated endpoints.

In summary, is there a way we can deploy the security detection analytics to monitor and detect across all endpoints, yet only allow the security analyst to search the Sysmon logs of the endpoint which triggered the alert, based on an ad hoc request via the case management system?
Try something like this

| eval Tag = split("Tag3,Tag4",",")
| mvexpand Tag
| spath
| foreach *Tags{}
    [| eval tags=if(mvfind(lower('<<FIELD>>'), "^".lower(Tag)."$") >= 0, mvappend(tags, "<<FIELD>>"), tags)]
| stats values(tags)

Note that mvfind uses regex, so you may get some odd results if your tags have special characters in them.
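To try it out without real data, here is a minimal sketch that wraps the same search around a made-up event via makeresults; the JSON field names itemTags and userTags are purely illustrative:

| makeresults
| eval _raw="{\"itemTags\": [\"Tag3\", \"Tag7\"], \"userTags\": [\"tag4\"]}"
| eval Tag = split("Tag3,Tag4",",")
| mvexpand Tag
| spath
| foreach *Tags{}
    [| eval tags=if(mvfind(lower('<<FIELD>>'), "^".lower(Tag)."$") >= 0, mvappend(tags, "<<FIELD>>"), tags)]
| stats values(tags) AS matching_tag_fields

The final stats should list which of the *Tags{} fields contained one of the requested tags (case-insensitively).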
I think the original poster is asking about getting Power BI activity logs into Splunk, not about letting Power BI interact with Splunk via an ODBC connector. I need to ingest Power BI activity logs into Splunk. Does anyone have any experience with that?
What's the difference between "Splunk VMware OVA for ITSI" and "Splunk OVA for VMware"?   The Splunk OVA for VMware appears to be more recent. Do they serve the same function? Can the "Splunk OVA for VMware" be used with ITSI? 
Hi, the cluster master is also our license manager. And by replacing a CM in place, do you mean keeping the IPs and DNS of the CM? Copying from /var/run is listed in https://docs.splunk.com/Documentation/Splunk/9.4.0/Indexer/Handlemanagernodefailure
I'm running Splunk on Windows and don't have the tcpdump command.
Is there a command or app that will decode base64 and detect the correct charset to output to? Currently, I'm unable to decode to UTF-16LE; Splunk wants to decode as UTF-8. In my current role, I cannot edit any .conf files; those are administered by a server team. If there is an app, I can request that it be installed; otherwise I'm working solely in SPL.
So if it's forwarding, there should be a splunkd.log that is recent?
Hi all, my issue is that I have Logstash data coming into Splunk; the sourcetype is HTTP Events and the logs arrive in JSON format. I need to know how I can use this data to find something meaningful. Also, with Windows forwarders we get event codes, so I can block unwanted event codes that give repeated information, but what can we do with the Logstash data if I want to do something similar? How do I extract information that we can use in Splunk?
Does anyone know how to do this on Splunk v8.0.5?
I have an existing search head that is peered to 2 cluster managers. This SH has the ES app on it. I am looking to add additional data from remote indexers. Do I just need to add the remote cluster manager as a peer to my existing SH so that I can access the data in ES?
I know this was a while ago now, but it may be helpful to others: try using the "hidden" dimension `_timeseries`. This is a JSON string that is an amalgamation of all of the dimensions for each datapoint. Take care: the results may be (very) high arity, and splunkd doesn't (yet?) have very strong protections for itself (in terms of RAM used while searching) when using this code path, so it is (IMHO) easy to crush your indexer tier's memory and cause lots of thrashing.
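Reading that as "group by it like any other dimension", a sketch might look like the following; the index and metric name are placeholders, it assumes _timeseries behaves as described above, and the cardinality caveat applies:

| mstats avg("cpu.utilization") WHERE index=my_metrics BY _timeseries span=5m

Each output row would then carry the full JSON dimension blob in the _timeseries column.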
I tried what you suggested, but it did not seem to help. It seemed as if the fix_subsecond stanza wasn't being executed at all; the _h KV pair followed _ts's value without a whitespace. After experimenting a bit more, I now have this, but it doesn't work either:

[md_time]
SOURCE_KEY = _time
REGEX = (.*)
FORMAT = _ts=$1 $0
DEST_KEY = time_temp

[md_subsecond]
SOURCE_KEY = _meta
REGEX = _subsecond::(\.\d+)
FORMAT = $1
DEST_KEY = subsecond_temp

[md_fix_subsecond]
INGEST_EVAL = _raw=if(isnull(subsecond_temp),time_temp+" "+_raw,time_temp+subsecond_temp+" "+_raw)

Plus props.conf:

[default]
ADD_EXTRA_TIME_FIELDS = none
ANNOTATE_PUNCT = false
SHOULD_LINEMERGE = false
TRANSFORMS-zza-syslog = syslog_canforward, reformat_metadata, md_add_separator, md_source, md_sourcetype, md_index, md_host, md_subsecond, md_time, md_fix_subsecond, discard_empty_msg
# The following applies for TCP destinations where the IETF frame is required
TRANSFORMS-zzz-syslog = syslog_octet_count, octet_count_prepend
# Comment out the above and uncomment the following for udp
#TRANSFORMS-zzz-syslog-udp = syslog_octet_count, octet_count_prepend, discard_empty_msg

[audittrail]
# We can't transform this source type, it's protected
TRANSFORMS-zza-syslog =
TRANSFORMS-zzz-syslog =

However, this now breaks logging and I'm getting no logs forwarded to syslog-ng. The connection is up, but no meaningful data arrives, just "empty" packages. What may be the problem? Did I break the sequence of the stanzas? (I did not seem to understand it in the first place, as they seem to be in backward order compared to how the KV pairs follow each other in the actual log message.)
Hi @Osama.Abbas, Thanks for asking your question on the community. I have shared this information with the Docs team via a ticket. I will post a reply when I've heard back from them.
Hi @Stephen.Knott, Did you see the most recent reply from Michael?
I'm not sure what you mean by "settings", but since your AIO had all the indexed data and you've spun up new, empty indexers, it's logical that your SH will search the empty indexers. The proper way to expand from a single AIO server is either as @isoutamo wrote (which is a bit more complicated to do as a single migration) or the other way:

1) Add another host as a search head and migrate search-time settings there. Leave your old server as an indexer. Verify that everything is working properly.

2) Add a CM and add your indexer as a peer to the CM. You might set RF=SF=1 for starters and then raise it later when you add another peer, or you can add another indexer at this step. The trick here is that your already indexed data is not clustered, and while it should be searchable, it will not get replicated.
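For step 2, a minimal server.conf sketch might look like this; the hostname, port and key are placeholders, and RF=SF=1 matches the "raise it later" suggestion above:

# server.conf on the cluster manager
[clustering]
mode = manager
replication_factor = 1
search_factor = 1
pass4SymmKey = <your_cluster_key>

# server.conf on the indexer joining as a peer
[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = <your_cluster_key>

[replication_port://9887]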
If this is production, or anything other than your lab environment, then you should configure TLS on those connections. There are instructions in the securing your Splunk environment guide, and there is also a .conf23 presentation about TLS ("slippery" or something similar).
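As a rough illustration of what that involves on the forwarding leg (paths, names and passwords are placeholders, and the certificates have to be prepared first):

# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/client.pem
sslPassword = <cert_password>
sslVerifyServerCert = true

# inputs.conf on the indexer
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
sslPassword = <cert_password>
requireClientCert = false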
If I recall correctly, you must "enable" the REST API first with a support ticket, or by using ACS to allow some networks to reach search-api. https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTTUT/RESTandCloud After that you can query via the REST endpoints.
When your starting point is an AIO and you want to go to a single SH + indexer cluster, and you want to keep your old data, then the steps at a high level are:

- install the CM and configure it
- add the current AIO node as the 1st peer
- add a 2nd peer
- add a new SH
- copy the needed apps etc. from the AIO onto the new SH

Please check the exact steps and how to do those from @gcusello's pointed document. There are detailed instructions on how to configure your CM, how to add peers, when and how to copy apps, and when to remove unnecessary apps from the old AIO node before using it as a search peer.
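For the "add a new SH" step, connecting the SH to the cluster is again a small server.conf change; a sketch, with the manager host and key as placeholders:

# server.conf on the new search head
[clustering]
mode = searchhead
manager_uri = https://cm.example.com:8089
pass4SymmKey = <your_cluster_key>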
No, you cannot use indexed_extractions or KV_MODE, as the event is not JSON; only a part of the event is.

The way I have gone about this is to extract the JSON bit automatically using props/transforms, so the JSON bit ends up in its own field and can then be worked on. Otherwise, I would look at whether you really need to extract all of the JSON, or just extract known important field values by pulling out their key-value pairs with regex, or even look at using ingest eval to extract the non-JSON bits to fields, then dump them and keep only the JSON. But it really all depends on the end user and their requirements/use case.

Either way, this is a custom data onboarding and will require some work to get your use case done. I usually ask why the sender is not using properly formatted JSON events to begin with, or whether they can just insert KV pairs like "foo=bar" instead of the useless JSON blob thrown into an unstructured event. Shoving JSON into a non-JSON event is not really the flex people think it is, but hey, that's what devs do these days. Either way, cleaning up log formats can be hard, so you may just have to find a way that works for this end user.

I know the pain; I have to deal with this in OTel Collector events like this:

2025-01-09T20:29:14.015Z info internal/retry_sender.go:126 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "splunk_hec/platform_logs", "error": "Post \"https://http-inputs.foo.splunkcloud.com/services/collector\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)", "interval": "5.105973834s"}

AFAIK there is no way you can deal with this so that a user doesn't have to spath the field, unless you hide it in a dashboard or fundamentally change the format of the event that's indexed.
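As a search-time illustration of the "extract the JSON bit into its own field" approach, here is a minimal SPL sketch against an event like the OTel Collector example above; the index and sourcetype are placeholders:

index=main sourcetype=otelcol_internal
| rex field=_raw "(?<json_part>\{.*\})\s*$"
| spath input=json_part
| table _time kind data_type name error interval

The rex captures the trailing {...} blob into json_part, and spath then extracts its keys (kind, data_type, name, error, interval in the sample event) as search-time fields.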