All Posts


I am referencing the following to create a custom command: https://github.com/splunk/splunk-app-examples/tree/master/custom_search_commands/python/reportingsearchcommands_app I downloaded the app and ran it. If I generate 200,000 rows with makeresults, only one result comes out. However, if I put the same content in an index or a lookup and run it, I get 7-10 results or so. The desired result is one, but multiple results come out. Is it possible to make it return only one?
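I can't see your command class, but the behavior you describe (one result from makeresults, several from an index) is typical of a command that aggregates per chunk: makeresults delivers everything in one chunk, while indexed/lookup data arrives in several, so each chunk produces its own partial result. The reporting example you linked gets single-result behavior from the reduce phase of a ReportingCommand, so check that the reduce step is actually being dispatched. The difference can be illustrated without Splunk at all; a stdlib-only sketch (chunk sizes are made up):

```python
# Illustration of per-chunk vs. globally-reduced aggregation.
def aggregate(records):
    # Toy "reporting" operation: count the records in one chunk.
    return [{"count": len(records)}]

def run_per_chunk(chunks):
    # Streaming-style: each chunk is aggregated independently,
    # so you get one partial result per chunk (your 7-10 results).
    results = []
    for chunk in chunks:
        results.extend(aggregate(chunk))
    return results

def run_with_reduce(chunks):
    # Reporting-style: the partial results are combined in a final
    # reduce step, so exactly one result comes out.
    partials = run_per_chunk(chunks)
    return [{"count": sum(p["count"] for p in partials)}]

one_chunk = [[{"n": i} for i in range(200000)]]                   # like makeresults
many_chunks = [[{"n": i} for i in range(100)] for _ in range(8)]  # like indexed data

print(len(run_per_chunk(one_chunk)))      # 1
print(len(run_per_chunk(many_chunks)))    # 8
print(len(run_with_reduce(many_chunks)))  # 1
```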
I see you want to determine the full paths of the values in the input list.  You have a second requirement that the input be a JSON array, ["Tag3", "Tag4"], and a third that the code needs to run on 8.0, which precludes the JSON functions introduced in 8.1.  Note that each path{} array has multiple values.  Without the help of JSON functions, you need to handle that first; the most common way to do this is with mvexpand. (The input array also needs this.)

| makeresults
| eval _raw = "{ \"Info\": { \"Apps\": { \"ReportingServices\": { \"ReportTags\": [ \"Tag1\" ], \"UserTags\": [ \"Tag2\", \"Tag3\" ] }, \"MessageQueue\": { \"ReportTags\": [ \"Tag1\", \"Tag4\" ], \"UserTags\": [ \"Tag3\", \"Tag4\", \"Tag5\" ] }, \"Frontend\": { \"ClientTags\": [ \"Tag12\", \"Tag47\" ] } } } }"
| spath ``` data emulation above ```
| eval Tags = "[\"Tag3\", \"Tag4\"]"
| foreach *Tags{} [mvexpand <<FIELD>>]
| spath input=Tags
| mvexpand {}
| foreach *Tags{} [eval tags=mvappend(tags, if(lower('<<FIELD>>') = lower('{}'), "<<FIELD>>", null()))]
| dedup tags
| stats values(tags)

If your dataset is large, note that mvexpand has some limitations.
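For reference, the matching the SPL above performs (case-insensitive membership of each *Tags value in the input list) looks like this outside Splunk. A stdlib-only sketch using the same sample event; the field names come from the data above:

```python
import json

event = json.loads("""{ "Info": { "Apps": {
  "ReportingServices": {"ReportTags": ["Tag1"], "UserTags": ["Tag2", "Tag3"]},
  "MessageQueue": {"ReportTags": ["Tag1", "Tag4"], "UserTags": ["Tag3", "Tag4", "Tag5"]},
  "Frontend": {"ClientTags": ["Tag12", "Tag47"]} } } }""")

# The input list, lowercased once for case-insensitive comparison.
wanted = {t.lower() for t in ["Tag3", "Tag4"]}

matches = set()
for app, fields in event["Info"]["Apps"].items():
    for field, tags in fields.items():
        if field.endswith("Tags"):  # equivalent of foreach *Tags{}
            matches.update(t for t in tags if t.lower() in wanted)

print(sorted(matches))  # ['Tag3', 'Tag4']
```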
Hello! I am getting this error when trying to authenticate to Splunk Enterprise. Could someone help me with it? Screenshot below.
@rohithvr19  It looks like there is some error in the endpoint.  Can you please check logs in "splunk/var/log/splunk/python.log"?  Sharing my sample code.  KV
I would like to understand whether the following scenario is possible:

1. Security detection queries/analytics relying on sysmon logs are onboarded and enabled.
2. When the logs of a certain endpoint match a security analytic, an alert is created and sent to a case management system for an analyst to investigate.
3. At this point, the analyst is not able to view the sysmon logs of that particular endpoint. He must manually trigger, from the case management platform, the sysmon logs to be indexed; only then can he search that endpoint's sysmon logs in Splunk for the past X days.
4. However, the analyst will not be able to search the sysmon logs of other, unrelated endpoints.

In summary: is there a way to deploy the security detection analytics to monitor and detect across all endpoints, yet only allow the security analyst to search the sysmon logs of the endpoint that triggered the alert, based on an ad-hoc request via the case management system?
Try something like this:

| eval Tag = split("Tag3,Tag4",",")
| mvexpand Tag
| spath
| foreach *Tags{} [| eval tags=if(mvfind(lower('<<FIELD>>'), "^".lower(Tag)."$") >= 0, mvappend(tags, "<<FIELD>>"), tags)]
| stats values(tags)

Note that mvfind uses regex, so you may get some odd results if your tags have special characters in them.
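The caveat about mvfind using regex matters whenever a tag contains a metacharacter such as `.`, `+`, or `(`. The same pitfall, and the fix of escaping the literal before anchoring it, can be shown with stdlib `re` (the tag values here are made up):

```python
import re

tags = ["Tag.1", "TagX1"]

# Unescaped: "." matches any character, so the pattern for "Tag.1"
# wrongly matches "TagX1" as well.
loose = re.compile("^" + "Tag.1".lower() + "$")

# Escaped: the dot is treated as a literal character.
strict = re.compile("^" + re.escape("Tag.1".lower()) + "$")

print([t for t in tags if loose.match(t.lower())])   # ['Tag.1', 'TagX1']
print([t for t in tags if strict.match(t.lower())])  # ['Tag.1']
```

In SPL there is no built-in regex-escape, so if your tags can contain such characters you would have to escape them yourself (for example with replace()) before handing the value to mvfind.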
...I think the original poster is asking about getting Power BI activity logs into Splunk, not about letting Power BI interact with Splunk via an ODBC connector. I need to ingest Power BI activity logs from Power BI to Splunk. Does anyone have any experience with that?
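One common approach is a small scheduled script that pulls the Power BI Activity Events admin REST API and forwards each event to a Splunk HTTP Event Collector. A rough stdlib-only sketch; the URLs, tokens, and the `powerbi:activity` sourcetype are placeholders, and real code also needs Azure AD authentication (an admin-scoped bearer token) and pagination via the API's continuation link:

```python
import json
import urllib.request

# Hypothetical endpoints/tokens -- substitute your own.
PBI_URL = ("https://api.powerbi.com/v1.0/myorg/admin/activityevents"
           "?startDateTime='2024-01-01T00:00:00Z'&endDateTime='2024-01-01T23:59:59Z'")
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_batch(events, sourcetype="powerbi:activity"):
    """Serialize activity events into a newline-delimited HEC payload."""
    return "\n".join(
        json.dumps({"event": e, "sourcetype": sourcetype}) for e in events
    )

def main():
    # Pull one day of activity events (requires an Azure AD token with
    # Power BI admin API permissions), then POST the batch to HEC.
    req = urllib.request.Request(
        PBI_URL, headers={"Authorization": "Bearer <aad-token>"})
    page = json.load(urllib.request.urlopen(req))
    payload = build_hec_batch(page.get("activityEventEntities", []))
    hec = urllib.request.Request(
        HEC_URL, data=payload.encode(),
        headers={"Authorization": f"Splunk {HEC_TOKEN}"}, method="POST")
    urllib.request.urlopen(hec)

# Call main() on a schedule (cron, scripted input, etc.) to run the pull.
```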
What's the difference between "Splunk VMware OVA for ITSI" and "Splunk OVA for VMware"?   The Splunk OVA for VMware appears to be more recent. Do they serve the same function? Can the "Splunk OVA for VMware" be used with ITSI? 
Hi, the cluster master is also our license manager.  And by replacing a CM in place, do you mean keeping the IPs and DNS of the CM? Copying from /var/run is listed in https://docs.splunk.com/Documentation/Splunk/9.4.0/Indexer/Handlemanagernodefailure
I'm running Splunk on Windows and don't have the tcpdump command.
Is there a command or app that will decode base64 and detect the correct charset for the output? Currently I'm unable to decode to UTF-16LE; Splunk wants to decode as UTF-8.  In my current role I cannot edit any .conf files; those are administered by a server team.  If there is an app, I can request it be installed; otherwise I'm working solely in SPL.
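I don't know of a built-in command that auto-detects the charset, but the heuristic any such tool uses is simple: UTF-16LE text (e.g. PowerShell's -EncodedCommand) shows up as a NUL byte after every ASCII character, while plain UTF-8 almost never contains NULs. A stdlib-only sketch of that detection, in case you can get a scripted/custom command installed:

```python
import base64

def decode_b64_guess(s):
    """Decode base64 and guess between UTF-16LE and UTF-8.

    Heuristic: a UTF-16LE BOM or embedded NUL bytes indicate UTF-16LE;
    otherwise fall back to UTF-8.
    """
    raw = base64.b64decode(s)
    if raw[:2] == b"\xff\xfe" or b"\x00" in raw:
        return raw.decode("utf-16-le", errors="replace")
    return raw.decode("utf-8", errors="replace")

utf16 = base64.b64encode("whoami".encode("utf-16-le")).decode()
utf8 = base64.b64encode("whoami".encode("utf-8")).decode()
print(decode_b64_guess(utf16))  # whoami
print(decode_b64_guess(utf8))   # whoami
```

If your server team will install apps, it may be worth searching Splunkbase for base64/decode commands and checking whether the one you pick handles UTF-16LE; I can't vouch for a specific app's charset handling.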
So if it's forwarding, there should be a recent splunkd.log?
Hi all, my issue is that I have Logstash data coming into Splunk; the sourcetype is HTTP Events and the logs arrive in JSON format. I need to know how I can use this data to find something meaningful. With Windows forwarders we get event codes, so I block unwanted event codes that give repeated information; what can I do with the Logstash data if I want to do something similar? How can I extract information that we can use in Splunk?
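Assuming the goal is the same as dropping noisy Windows EventCodes, the usual index-time approach is a props/transforms pair on the indexer or heavy forwarder that routes unwanted JSON events to nullQueue. A sketch only; the `httpevent` sourcetype name and the `event_code` field are assumptions about your data, and whether index-time transforms apply depends on how the HEC data arrives, so test on a sample first:

```
# props.conf
[httpevent]
TRANSFORMS-drop_noise = drop_noisy_event_codes

# transforms.conf
[drop_noisy_event_codes]
# Match the unwanted codes in the raw JSON and discard those events.
REGEX = "event_code"\s*:\s*"?(4662|5156)"?
DEST_KEY = queue
FORMAT = nullQueue
```

For search-time use, `spath` (or automatic KV_MODE=json extraction) will give you the JSON fields to build reports from.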
Does anyone know how to do this on Splunk v8.0.5?
I have an existing search head that is peered to two cluster managers. This SH has the ES app on it. I am looking to add data from remote indexers. Do I just need to add the remote cluster manager as a peer to my existing SH so that I can access the data in ES?
I know this is a while ago now, but maybe helpful to others...try using the "hidden" dimension `_timeseries`.  This is a JSON string that is an amalgamation of all of the dimensions for each datapoint. Take care, the results may be (very) high arity and splunkd doesn't (yet?) have very strong protections for itself (in terms of RAM used while searching) when using this code path, so it is (IMHO) easy to crush your indexer tier's memory and cause lots of thrashing.  
I tried what you suggested, but it did not seem to help. It seemed as if the fix_subsecond stanza wasn't executed at all: the _h KV pair followed _ts's value without a whitespace. After experimenting a bit more, I now have this, but it doesn't work either:

[md_time]
SOURCE_KEY = _time
REGEX = (.*)
FORMAT = _ts=$1 $0
DEST_KEY = time_temp

[md_subsecond]
SOURCE_KEY = _meta
REGEX = _subsecond::(\.\d+)
FORMAT = $1
DEST_KEY = subsecond_temp

[md_fix_subsecond]
INGEST_EVAL = _raw=if(isnull(subsecond_temp),time_temp+" "+_raw,time_temp+subsecond_temp+" "+_raw)

Plus props.conf:

[default]
ADD_EXTRA_TIME_FIELDS = none
ANNOTATE_PUNCT = false
SHOULD_LINEMERGE = false
TRANSFORMS-zza-syslog = syslog_canforward, reformat_metadata, md_add_separator, md_source, md_sourcetype, md_index, md_host, md_subsecond, md_time, md_fix_subsecond, discard_empty_msg
# The following applies for TCP destinations where the IETF frame is required
TRANSFORMS-zzz-syslog = syslog_octet_count, octet_count_prepend
# Comment out the above and uncomment the following for udp
#TRANSFORMS-zzz-syslog-udp = syslog_octet_count, octet_count_prepend, discard_empty_msg

[audittrail]
# We can't transform this source type, it's protected
TRANSFORMS-zza-syslog =
TRANSFORMS-zzz-syslog =

However, this now breaks logging and I'm getting no logs forwarded to syslog-ng. The connection is up, but no meaningful data arrives, just "empty" packages. What may be the problem? Did I break the sequence of the stanzas? (I don't seem to have understood the ordering in the first place, as the stanzas appear to be in backward order compared to how the KV pairs follow each other in the actual log message.)
Hi @Osama.Abbas, Thanks for asking your question on the community. I have shared this information with the Docs team with a ticket. I will post a reply when I've heard back from them. 
Hi @Stephen.Knott, Did you see the most recent reply from Michael?
I'm not sure what you mean by "settings", but since your AIO had all the indexed data and you've spun up new, empty indexers, it's logical that your SH will search the empty indexers. The proper way to expand from a single AIO server is either as @isoutamo wrote (which is a bit more complicated to do as a single migration), or the other way:

1) Add another host as a search head and migrate the search-time settings there. Leave your old server as an indexer. Verify that everything is working properly.
2) Add a CM and add your indexer as a peer to the CM. You might set RF=SF=1 for starters and raise it later when you add another peer, or you can add another indexer at this step. The trick here is that your already-indexed data is not clustered; while it should remain searchable, it will not get replicated.