All Posts


Hi all, I want to send logs (which are part of our sourcetype [kube_audit]) from my heavy forwarder to a third-party system (in my case a SIEM) in syslog format, and only those events which match the regex defined below. Everything else should be sent normally to my indexers. Documentation exists, but it has no further description for my use case. (https://docs.splunk.com/Documentation/Splunk/9.1.3/Forwarding/Routeandfilterdatad#Filter_and_route_event_data_to_target_groups , https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Forwarddatatothird-partysystemsd ) I tried to follow the documentation and tried many things, but I end up with my third-party host receiving ALL logs of my sourcetype [kube_audit] instead of only a part of them. I checked my regex, as I suspected it would be my point of failure, but there must be some other configuration I am missing, because in a simple test setup the regex works as it is. My setup for props.conf, transforms.conf and outputs.conf:

props.conf:

[kube_audit]
TRANSFORMS-routing = route_to_sentinel

transforms.conf:

[route_to_sentinel]
REGEX = (?<sentinel>"verb":"create".*"impersonatedUser".*"objectRef":\{"resource":"pods".*"subresource":"exec")
DEST_KEY = _SYSLOG_ROUTING
FORMAT = sentinel_forwarders

outputs.conf:

[tcpout]
defaultGroup = my_indexers
forwardedindex.filter.disable = true
indexAndForward = false
useACK = true
backoffOnFailure = 5
connectionTTL = 3500
writeTimeout = 100
maxConnectionsPerIndexer = 20

[tcpout:my_indexers]
server = <list_of_servers>

[syslog]
defaultGroup = sentinel_forwarders

[syslog:sentinel_forwarders]
server = mythirdpartyhost:514
type = udp

Am I missing something? Any notable things I did miss? Any help is appreciated!

Best regards
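One thing that may be worth checking here (a sketch based on the posted config, not a confirmed fix): a defaultGroup set under the [syslog] stanza in outputs.conf makes that syslog group a default destination, so every event is sent to it regardless of whether the transform set _SYSLOG_ROUTING. A minimal sketch of the syslog section without it, everything else unchanged:

[syslog]
# no defaultGroup here -- with one set, all events go to the syslog
# group; without it, only events whose transform sets
# DEST_KEY = _SYSLOG_ROUTING are syslog-routed

[syslog:sentinel_forwarders]
server = mythirdpartyhost:514
type = udp

With defaultGroup removed, only events matching the route_to_sentinel regex should reach mythirdpartyhost, while everything else follows the tcpout defaultGroup to the indexers.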
I'm creating a form that requires a date input, but rather than have users type the date (to avoid the risk of typos and errors) I want to use the month view from the time picker (screenshot omitted). I had a working version with an HTML page, but we are now on 9.2.3 so that is no longer available.... The field will only ever require a single date - never a time, never a range, never real-time, etc. It will also ensure the correct format (Day-Month-Year) is used - looking at you, America. Thanks
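A rough sketch of one possible workaround in Simple XML, assuming a time input can stand in for a date picker (the token names here are made up, and the <change> handler assumes the picker returns an epoch value for earliest when a specific day is chosen):

<input type="time" token="picked">
  <label>Date</label>
  <default>
    <earliest>@d</earliest>
    <latest>now</latest>
  </default>
  <change>
    <!-- hypothetical token; only resolves when earliest is an epoch,
         not a relative string like @d or a real-time window -->
    <eval token="picked_date">strftime(tonumber($picked.earliest$), "%d-%m-%Y")</eval>
  </change>
</input>

The $picked_date$ token could then feed the form's searches in Day-Month-Year format; preset ranges and real-time selections would still need to be ignored or validated downstream.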
Nice, that works sir! Apologies, I just needed to update the sample data to avoid oversharing of work items. We have a lot of these fields in our environment. Do you think this is possible using transforms and props? Thanks!
index=myindex sourcetype=mystuff Environment=thisone "THE_TERM" | eval option="THE_TERM"
Try to avoid using join; it is slow and inefficient. Try something like this search:

index="box" (sourcetype="box:events" event_type=DOWNLOAD earliest=-1d) OR (sourcetype="box:file" earliest=-1) OR (sourcetype="box:folder" earliest=-1)
| eval source_item_id=if(sourcetype="box:file",id,source_item_id)
| eval source_parent_id=if(sourcetype="box:folder",id,source_parent_id)
| eventstats values(location) as file_location by source_item_id
| eventstats values(location) as folder_location by source_parent_id
| where sourcetype="box:events"
| table _time source_item_name source_item_id source_parent_id file_location folder_location
Where does the term come from?
What help do you need? Please explain what your issue is, and what your desired results would look like.
Found an example and this seems to work...

index="ee_apigee" vhost="rbs" uri="/eforms/v1.0/cb/*"
| rex "(?i) .*?=\"(?P<httpstatus>\d+)(?=\")"
| bucket _time span=day
| stats count by _time, httpstatus
| eventstats sum(count) as totalCount by _time
| eval percentage = round((count/totalCount)*100,3) . " %"
| table _time httpstatus count percentage
typo

> where L.source_item_id=L.id
where L.source_item_id=R.id

> where L.source_parent_id=L.id
where L.source_parent_id=R.id
I'm using the `Splunk Add-on for Box` to collect Box logging data. As a premise, `box:events` contains information for `uploaded`, `deleted` and `downloaded` events with `source_item_id` and `source_parent_id` fields, where `source_item_id` means the file id and `source_parent_id` means its folder id. `box:file` events contain a file `id` and `location`. `box:folder` events contain a folder `id` and `location`. My purpose is to resolve the folder location for a file action event in `box:events`. I can resolve the file location via `box:file` with one outer join SPL like this:

search index="box" sourcetype="box:events" event_type=DOWNLOAD earliest=-1d
| fields _time source_item_name source_item_id source_parent_id
| join type=outer left=L right=R where L.source_item_id=L.id
    [ search index=box sourcetype="box:file" earliest=-1
      | fields id location ]
| table L._time L.source_item_name R.location

And I can do the same with `box:folder` like this:

search index="box" sourcetype="box:events" event_type=DOWNLOAD earliest=-1d
| fields _time source_item_name source_item_id source_parent_id
| join type=outer left=L right=R where L.source_parent_id=L.id
    [ search index=box sourcetype="box:folder" earliest=-1
      | fields id location ]
| table L._time L.source_item_name R.location

But I don't know how to integrate the above two searches into one. Please share some ideas. Thanks in advance.
Good day, I am trying to figure out how to join two searches to see whether there is a ServiceNow ticket open for someone leaving the company and whether that person is still signing into some of our platforms. This first search gets the sign-in details - as users might have multiple email addresses, I want them all.

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
      | fields user
      | dedup user
      | eval email=user, extensionAttribute10=user, extensionAttribute11=user
      | fields email extensionAttribute10 extensionAttribute11
      | format "(" "(" "OR" ")" "OR" ")" ]
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| table email extensionAttribute10 extensionAttribute11 first last identity
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

This second search checks all leavers in SNOW:

index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
| dedup description
| table _time affect_dest active description dv_state number

Unfortunately the Shub does not add the email to the description, only user first names and surnames. So I would need to search the first query's 'first' and 'last' values against the second query to find leavers. This is what I tried, but it does not work:

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
      | fields user
      | dedup user
      | eval email=user, extensionAttribute10=user, extensionAttribute11=user
      | fields email extensionAttribute10 extensionAttribute11
      | format "(" "(" "OR" ")" "OR" ")" ]
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| table email extensionAttribute10 extensionAttribute11 first last identity
| search "*first*" "*last*"
    [ search index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
      | dedup description
      | table _time affect_dest active description dv_state number ]
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity
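A sketch of one way to attack this, going the other way around - filter the SNOW incidents by name pairs built from the identities search (field names are taken from the queries above; the wildcard match assumes the description contains the first name followed by the surname):

index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
| dedup description
| search
    [ search index=collect_identities sourcetype=ldap:query
      | dedup first last
      | eval description="*".first."*".last."*"
      | fields description
      | format ]
| table _time affect_dest active description dv_state number

The subsearch renders as (description="*John*Smith*" OR ...), so only leaver incidents whose description mentions a known first/last pair survive; the matched names could then be tied back to the email list from the first search.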
index=web_logs sourcetype=access_combined
| eval request_duration=round(duration/1000, 2)
| stats avg(request_duration) as avg_duration by host, uri_path
| where avg_duration > 2
| sort - avg_duration
Yes, that is correct. I don't want to alter the location and hostname columns. I just want to append the IP and MAC columns if it matches the hostname and host. In addition, I don't want to overwrite the hostnames.csv file. Thank you
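If it helps, a minimal sketch of how that could look with the lookup command (hostnames.csv, the hostname/host pairing and the IP/MAC column names are taken from this thread; OUTPUTNEW only fills fields that are still empty, and nothing is written back to the CSV):

| lookup hostnames.csv hostname AS host OUTPUTNEW IP MAC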
Hi All, I have a search string...

index="ee_apigee" vhost="rbs" uri="/eforms/v1.0/cb/*"
| rex "(?i) .*?=\"(?P<httpstatus>\d+)(?=\")"
| bucket _time span=day
| stats count by _time, httpstatus
| eventstats sum(count) as total
| eval percent = (count/total)*100 . " %"
| fields - total

...whose percent field shows a percentage over the entire period searched, not just the day. How can the above be modified to give a percentage per day for each httpstatus?
Hi all

After installing Splunk_TA_nix with no local/inputs on the heavy forwarders, the error I was seeing in this post went away, so that one was actually solved. However, the issue with missing linebreaks in the output mentioned by @PickleRick remains: "1) Breaks the whole lastlog output into separate events on the default LINE_BREAKER (which means every line is treated as separate event)". So I thought I'd see if I could get that one confirmed and/or fixed as well.

When searching for "source=lastlog" right now I get a list of events from each host like so:

> user2 10.0.0.1 Wed Oct 30 11:20
> another_user 10.0.0.1 Wed Oct 30 11:21
> discovery 10.0.0.2 Tue Oct 29 22:19
> scanner 10.0.0.3 Mon Oct 28 21:39
> admin_user 10.0.0.4 Mon Oct 21 11:19
> root 10.0.0.1 Tue Oct 1 08:57

Before placing the TA on the HFs I would see output only containing the header

> USERNAME FROM LATEST

which is completely useless. After adding the TA to the HFs this "header" line is no longer present at all, in any events from any server, while field names are correct and fully searchable with IP addresses, usernames etc. My question at this point is probably best formulated as "am I alright now?"

Based on the feedback in the previous post I was sort of assuming that the expected output/events should be the same as the screen output when running the script locally, i.e. one event with the entire output, like so:

USERNAME FROM LATEST
user2 10.0.0.1 Wed Oct 30 11:20
another_user 10.0.0.1 Wed Oct 30 11:21
discovery 10.0.0.2 Tue Oct 29 22:19
scanner 10.0.0.3 Mon Oct 28 21:39
admin_user 10.0.0.4 Mon Oct 21 11:19
root 10.0.0.1 Tue Oct 1 08:57

While I can see this being easier on the eyes and easier to interpret, it could make processing individual field:value pairs more problematic in searches. So what I am wondering is: is everything "OK" now, or am I still getting events with incorrect linebreaks? I don't know what the expected/correct output should be.

Best regards
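For what it's worth, if the goal were one event per script run (the whole lastlog output kept together), a props.conf sketch on the HFs might look like the following - an untested assumption keyed off the header line shown above, not what the TA actually ships:

[lastlog]
# break a new event wherever the header line starts; the capture
# group marks the split point, so each full lastlog dump stays together
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)USERNAME\s+FROM\s+LATEST
TRUNCATE = 0

A new event then starts at each USERNAME FROM LATEST header, with all user lines below it attached to the same event.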
Hi @gcusello, did you mean that I need to enable the stanza below?

###### Monitor Inputs for DNS ######
[MonitorNoHandle://$WINDIR\System32\Dns\dns.log]
sourcetype=MSAD:NT6:DNS
disabled=1

While monitoring DNS logs directly with the Splunk Universal Forwarder is effective, some articles suggest using the Splunk Stream Forwarder apps to enhance log efficiency and analysis capabilities. What is the best practice?
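For reference, a sketch of what enabling it in a local override might look like (the path and sourcetype are copied from the default stanza above; the app directory is an assumption, and only the disabled flag changes):

# e.g. Splunk_TA_windows/local/inputs.conf
[MonitorNoHandle://$WINDIR\System32\Dns\dns.log]
sourcetype = MSAD:NT6:DNS
disabled = 0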
Hi @hazem, use the Splunk_TA_Windows (https://splunkbase.splunk.com/app/742), enabling the relevant stanzas. Ciao. Giuseppe
It is not clear what you are trying to do. You use "full" in your pattern, but you mention "FULL" in your description. Do you need what is extracted to be changed to uppercase? What do you mean by "not between"? Do you just want the word "full" if it exists in the msg_old field? Please clarify with sample (anonymised) events and a clear description of what you want to extract from where, and under what circumstances. Please also include a representation of what your expected output would look like.
Hello, how do I collect DNS logs from Active Directory where the domain controllers have a DNS role?
I have three new Splunk Enterprise instances. Two are acting as search heads and one is acting as the deployer. I have successfully made them into a cluster and can even push user and search configs, but when I push apps or add-ons from the deployer I get the error below.

Error in pre-deploy check, uri=https://x.x.x.x:8089/services/shcluster/captain/kvstore-upgrade/status, status=401, error=No error

The password is correct for the deployer, and it is the same for both SHs. What could be the issue here?

PS: I know Splunk recommends using 3 SHs and 1 deployer. I tried that as well but have the same issue.
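Not a definitive answer, but a 401 between the deployer and the captain often points at a shared-secret mismatch rather than an admin password problem. A sketch of the setting to compare on the deployer and both members (the label value is a placeholder; the plaintext secret must be identical everywhere before each instance restarts and hashes it):

# server.conf on the deployer and every SH member
[shclustering]
pass4SymmKey = <same_plaintext_secret_on_all_nodes>
shcluster_label = shcluster1

If the secrets differ, re-entering the same plaintext value on every node and restarting each instance would be the usual first step.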