All Posts

Typo correction for the query below: `where L.source_item_id=L.id` should read `where L.source_item_id=R.id`, and `where L.source_parent_id=L.id` should read `where L.source_parent_id=R.id`.
I'm using the `Splunk Add-on for Box` to collect Box logging data. As a premise, `box:events` contains `uploaded`, `deleted`, and `downloaded` events with `source_item_id` and `source_parent_id` fields, where `source_item_id` is the file id and `source_parent_id` is its folder id. `box:file` events contain a file `id` and `location`; `box:folder` events contain a folder `id` and `location`. My goal is to resolve the folder location for a file action event in `box:events`.

I can resolve the file location via `box:file` with one outer join, like this:

search index="box" sourcetype="box:events" event_type=DOWNLOAD earliest=-1d
| fields _time source_item_name source_item_id source_parent_id
| join type=outer left=L right=R where L.source_item_id=L.id
    [ search index=box sourcetype="box:file" earliest=-1d | fields id location ]
| table L._time L.source_item_name R.location

And I can do the same with `box:folder`, like this:

search index="box" sourcetype="box:events" event_type=DOWNLOAD earliest=-1d
| fields _time source_item_name source_item_id source_parent_id
| join type=outer left=L right=R where L.source_parent_id=L.id
    [ search index=box sourcetype="box:folder" earliest=-1d | fields id location ]
| table L._time L.source_item_name R.location

But I don't know how to integrate these two searches into one. Please share some ideas. Thanks in advance.
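One possible way to combine the two lookups in a single search — a sketch only, using the classic field-based `join` syntax and renaming `id`/`location` inside each subsearch so the two joins don't collide; the field names `file_location` and `folder_location` are illustrative, not fields from the add-on:

search index="box" sourcetype="box:events" event_type=DOWNLOAD earliest=-1d
| fields _time source_item_name source_item_id source_parent_id
| join type=outer source_item_id
    [ search index=box sourcetype="box:file" earliest=-1d
      | fields id location
      | rename id AS source_item_id, location AS file_location ]
| join type=outer source_parent_id
    [ search index=box sourcetype="box:folder" earliest=-1d
      | fields id location
      | rename id AS source_parent_id, location AS folder_location ]
| table _time source_item_name file_location folder_location

Renaming inside each subsearch keeps the file location and the folder location as distinct columns after both joins.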
Good day. I am trying to figure out how I can join two searches to see if there is a ServiceNow ticket open for someone leaving the company and whether that person is still signing into some of our platforms.

This gets the sign-in details for the platform; as users might have multiple email addresses, I want them all:

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
      | fields user
      | dedup user
      | eval email=user, extensionAttribute10=user, extensionAttribute11=user
      | fields email extensionAttribute10 extensionAttribute11
      | format "(" "(" "OR" ")" "OR" ")" ]
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| table email extensionAttribute10 extensionAttribute11 first last identity
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

This checks all leavers in SNOW:

index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
| dedup description
| table _time affect_dest active description dv_state number

Unfortunately the Shub does not add the email to the description, only first names and surnames. So I would need to match the first query's 'first' and 'last' fields against the second query to find leavers. This is what I tried, but it does not work:

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
      | fields user
      | dedup user
      | eval email=user, extensionAttribute10=user, extensionAttribute11=user
      | fields email extensionAttribute10 extensionAttribute11
      | format "(" "(" "OR" ")" "OR" ")" ]
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| table email extensionAttribute10 extensionAttribute11 first last identity
| search "*first*" "*last*"
    [ search index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
      | dedup description
      | table _time affect_dest active description dv_state number ]
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity
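One hedged approach is to turn the search around: use the identities to build a wildcard filter over the incident descriptions, assuming each description contains the name as "first last". The `full_name` field below is illustrative, not an existing field:

index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
    [ search index=collect_identities sourcetype=ldap:query
      | eval full_name=first." ".last
      | dedup full_name
      | eval description="*".full_name."*"
      | fields description
      | format ]
| dedup description
| table _time affect_dest active description dv_state number

Here the subsearch emits a `( description="*First Last*" OR ... )` filter via `format`, so only incidents mentioning a known identity are returned; the matching incidents could then be enriched with the email and extension attributes in a follow-up step.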
index=web_logs sourcetype=access_combined
| eval request_duration=round(duration/1000, 2)
| stats avg(request_duration) as avg_duration by host, uri_path
| where avg_duration > 2
| sort - avg_duration
Yes, that is correct. I don't want to alter the location and hostname columns; I just want to append the IP and MAC columns if they match the hostname and host. In addition, I don't want to overwrite the hostnames.csv file. Thank you.
Hi all, I have a search string...

index="ee_apigee" vhost="rbs" uri="/eforms/v1.0/cb/*"
| rex "(?i) .*?=\"(?P<httpstatus>\d+)(?=\")"
| bucket _time span=day
| stats count by _time, httpstatus
| eventstats sum(count) as total
| eval percent = (count/total)*100 . " %"
| fields - total

...whose percent field shows a percentage over the entire period searched, not just the day. How can the above be modified to give a percentage per day for each httpstatus?
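A minimal sketch of one way to scope the total to each day, keeping the rest of the search as above: add a `by _time` clause to the `eventstats` so the sum is computed per day bucket rather than over the whole result set.

index="ee_apigee" vhost="rbs" uri="/eforms/v1.0/cb/*"
| rex "(?i) .*?=\"(?P<httpstatus>\d+)(?=\")"
| bucket _time span=day
| stats count by _time, httpstatus
| eventstats sum(count) as total by _time
| eval percent = round((count/total)*100, 2) . " %"
| fields - total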
Hi all,

After installing Splunk_TA_nix with no local/inputs on the heavy forwarders, the error I was seeing in this post went away, so that one was actually solved. However, the issue with missing linebreaks in the output mentioned by @PickleRick remains: "1) Breaks the whole lastlog output into separate events on the default LINE_BREAKER (which means every line is treated as separate event)". So I thought I'd see if I could get that one confirmed and/or fixed as well.

When searching for "source=lastlog" right now I get a list of events from each host like so:
> user2 10.0.0.1 Wed Oct 30 11:20
> another_user 10.0.0.1 Wed Oct 30 11:21
> discovery 10.0.0.2 Tue Oct 29 22:19
> scanner 10.0.0.3 Mon Oct 28 21:39
> admin_user 10.0.0.4 Mon Oct 21 11:19
> root 10.0.0.1 Tue Oct 1 08:57

Before placing the TA on the HFs I would see output containing only the header
> USERNAME FROM LATEST
which is completely useless. After adding the TA to the HFs this "header" line is no longer present at all, in any events from any server, while field names are correct and fully searchable with IP addresses, usernames etc.

My question at this point is probably best formulated as "am I alright now?" Based on the feedback in the previous post I was sort of assuming that the expected output/events should be the same as the screen output when running the script locally, i.e. one event with the entire output, like so:

USERNAME FROM LATEST
user2 10.0.0.1 Wed Oct 30 11:20
another_user 10.0.0.1 Wed Oct 30 11:21
discovery 10.0.0.2 Tue Oct 29 22:19
scanner 10.0.0.3 Mon Oct 28 21:39
admin_user 10.0.0.4 Mon Oct 21 11:19
root 10.0.0.1 Tue Oct 1 08:57

While I can see this as being easier on the eyes and easier to interpret when found, it could make processing individual field:value pairs more problematic in searches. So what I am wondering is: is everything "OK" now, or am I still getting events with incorrect linebreaks? I don't know what the expected/correct output should be.

Best regards
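For reference, a hedged props.conf sketch of how this kind of scripted output could be forced into a single event, if that turns out to be the desired behavior. The stanza name and the header text in the regex are assumptions, not the settings shipped with Splunk_TA_nix, so check the TA's own props.conf before applying anything like this:

# props.conf on the heavy forwarder (hedged sketch; stanza name and header regex are assumptions)
[lastlog]
SHOULD_LINEMERGE = false
# Break only where a new run of the script starts, i.e. at the header line,
# so one scripted-input execution stays together as a single event.
LINE_BREAKER = ([\r\n]+)(?=USERNAME\s+FROM\s+LATEST)
TRUNCATE = 0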
Hi @gcusello, did you mean that I should enable the stanza below?

###### Monitor Inputs for DNS ######
[MonitorNoHandle://$WINDIR\System32\Dns\dns.log]
sourcetype=MSAD:NT6:DNS
disabled=1

While monitoring DNS logs directly with the Splunk Universal Forwarder is effective, some articles suggest using the Splunk Stream Forwarder apps to enhance log efficiency and analysis capabilities. What is the best practice?
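For context, a hedged sketch of what enabling that input via a local override might look like; the stanza name and sourcetype are copied from the snippet above and should be verified against the add-on's default/inputs.conf rather than taken as the add-on's documented configuration:

# Splunk_TA_windows/local/inputs.conf (hedged sketch; verify the stanza name against the add-on's defaults)
[MonitorNoHandle://$WINDIR\System32\Dns\dns.log]
sourcetype = MSAD:NT6:DNS
# Overrides the shipped disabled=1 so the DNS log input starts collecting
disabled = 0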
Hi @hazem, use the Splunk_TA_Windows (https://splunkbase.splunk.com/app/742), enabling the relevant stanzas. Ciao. Giuseppe
It is not clear what you are trying to do. You use "full" in your pattern, but you mention "FULL" in your description. Do you need what is extracted to be changed to uppercase? What do you mean by "not between"? Do you just want the word "full" if it exists in the msg_old field? Please clarify with sample (anonymised) events and a clear description of what you want to extract from where, and under what circumstances. Please also include a representation of what your expected output would look like.
Hello, how do I collect DNS logs from Active Directory where the domain controllers have a DNS role?
I have three new Splunk Enterprise instances: two are acting as search heads and one is acting as the deployer. I have successfully made them into a cluster and can even push user and search configs, but when I push apps or add-ons from the deployer I get the error below.

Error in pre-deploy check, uri=https://x.x.x.x:8089/services/shcluster/captain/kvstore-upgrade/status, status=401, error=No error

The password is correct for the deployer, and it is the same for both SHs. What could be the issue here?

PS: I know Splunk recommends using 3 SHs and 1 deployer. I tried that as well but have the same issue.
The problem with using made-up fake field names is that any proposed solution might not match your actual use case. However, here is a solution using the names you provided - hopefully you will get the idea and be able to adapt it to your actual use case.

| makeresults format=csv data="sample_1_country,sample_2_country,sample_99_country,sample_37_country
Denmark,Chile,Thailand,Croatia"
| foreach sample_*_country
    [| eval sample_country_name=if(isnull(sample_country_name),<<FIELD>>,mvappend(sample_country_name,<<FIELD>>))]
| eval sample_country_name=mvjoin(sample_country_name,",")
Hi @splunkerarijit, I can see that this is a known issue with the latest version of ES; it has already been reported to Splunk and they have provided a workaround as well. Please refer to the doc below for more info: https://docs.splunk.com/Documentation/ES/7.3.2/RN/KnownIssues. If this helps, please upvote or accept the solution if it solved your issue.
@souha  Splunk SOAR (On-premises) supports these operating systems and versions:
- Red Hat Enterprise Linux 7.6 through 7.9
- Red Hat Enterprise Linux 8.0 and any of the minor versions of 8. You can use the most recent minor release of RHEL 8 that is available at the time of the Splunk SOAR (On-premises) release.
- Amazon Linux 2
- Oracle Linux 8
If you are unable to use any of these then you should raise a support case to see if they can help. I think you could edit the install script to allow for another *nix OS, but then you would be out of any support entitlement.
I was trying to install Splunk SOAR on a CentOS 9 machine, but I'm getting this error: "Unable to read CentOS/RHEL version from /etc/redhat-release". I think it is due to the end of life of CentOS 7 and 8, as the provided installers for Splunk SOAR support only those versions. What should I do?
Hi Splunkers,

How can I create a single-value field based on multiple fields? Also, let's assume that the field names can range from sample_1_country_1_name to sample_99_country_1_name and from sample_1_country_1_name to sample_1_country_99_name.

Example:
sample_1_country  sample_2_country  sample_99_country  sample_37_country
Denmark           Chile             Thailand           Croatia

Result:
sample_country_name
Denmark, Chile, Thailand, Croatia

Thanks!
Hi @nabeel652, to my knowledge you can schedule a search with cron to run on Tuesdays, but not specifically on the second Tuesday. To do this, the only way is to add a constraint to the search. Ciao. Giuseppe
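A minimal sketch of such a constraint, assuming the search is cron-scheduled for every Tuesday (e.g. 0 8 * * 2): the second Tuesday of a month always falls on day 8 through 14, so the search can simply return nothing on the other Tuesdays.

<base search>
| eval day_of_month = tonumber(strftime(now(), "%d"))
| where day_of_month >= 8 AND day_of_month <= 14

Here `<base search>` is a placeholder for the scheduled search itself; only the runs that fall on days 8-14 (the second Tuesday) will produce results and trigger any alert actions.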
Unfortunately it is not a fixed term or field. It is just a random term for a search, similar to searching for "FOO" in a 10,000-page document in MS Word. Now I am trying to figure out how to make that useful in the table as a result. I tried an input file this morning but I'm not familiar with working with that.

Table desired:
Environment  userid  option
abc          defgh   THE TERM
Hello, this really sums it all up for me.

index="_internal" source="*metrics.lo*" group=tcpin_connections fwdType=uf
| stats latest(_time) as lastSeen by hostname, sourceIp, fwdType, guid, version, build, os, arch
| eval lastSeenFormatted = strftime(lastSeen, "%Y-%m-%d %H:%M:%S")
| eval timeDifferenceSec = now() - lastSeen
| eval timeSinceLastSeen = tostring(floor(timeDifferenceSec / 3600)) . "h " . tostring(round((timeDifferenceSec % 3600) / 60)) . "m"
| table hostname, sourceIp, fwdType, guid, version, build, os, arch, lastSeenFormatted, timeSinceLastSeen