All Posts



I have a field "Labels" with multiple values in a single field. I need to see only the OS value, red hat(linux) or windows 2019. I tried eval in SPL, but as a result I got either the first value or an empty cell. Thank you. Please see the eval statement and sample data below.

| eval Labels=split(Labels, " ")

Sample data before eval:
development red hat(linux) main_ucmdb
or contingency production red hat(linux) main_ucmdb
or production windows 2019 wintel server microsoft windows server 2019 standard main_ucmdb
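A possible reason eval returned only the first token or an empty cell is that split(Labels, " ") breaks multi-word values such as "red hat(linux)" into separate tokens. A hedged sketch that instead matches the whole field (assuming the OS string always appears verbatim inside Labels, as in the sample rows above) would be:

```
| eval OS=case(like(Labels, "%red hat(linux)%"), "red hat(linux)",
               like(Labels, "%windows 2019%"), "windows 2019",
               true(), null())
| table OS
```

This is a sketch, not a tested query; adjust the like() patterns if the OS strings vary in your data.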
As @scelikok said, your description is a bit vague. Without a sample of your data (anonymized if needed; we don't need your internal secrets ;-)) and a description of what you want to get from it, it's pretty much impossible to help you, because we have no idea what we're talking about.
OK. Maybe you misunderstand how Splunk works. You don't "connect splunk to a linux server". You install a UF on the server and (and this might be one of the parts you're missing) you make it send events to Splunk. So, did you verify any of the things I asked you about earlier?
Hi @pranay03, Since your otel collector was already installed without the start_at=beginning parameter, you should remove otel, delete the checkpoint files on your nodes under the "/var/addon/splunk/otel_pos/" folder, and install otel again. This should make otel re-read all files.
This is due to issues with load balancers sitting in front of the hosts. Check the stickiness of the load balancers. Accessing an SH member directly should solve it.
We are using 'Splunk App for Lookup File Editing' version 4.0.1. There are two issues that bother me. First is the continuous popup about 'Save Backup'. I need a way of turning this off, since most of the time my edits are to ad hoc lookups and I don't want a backup. Second is the "can't save" error message I get in Firefox. I have to quit Firefox and restart, and then I can save. After a number of ad hoc lookups I get the error message again. Any workarounds beyond restarting the browser?
Hi @Anurag101, It is not possible to help without knowing your data. Also, the solution depends on whether the data is already extracted or not. If you can show some sample anonymized events, we can help with a sample query.
Hey @scelikok,

Thanks for jumping in to look at this. I agree that part of the problem here is typing, because it only returns the wrong lookup uID values for secondUID values that could reasonably be interpreted as numbers. And maybe this comes down to poor error handling. I wouldn't be satisfied just writing this off as a typing issue, though, for multiple reasons:

First (opaque): this would be completely opaque to the end user. Maybe you have enough experience to tell me, but if it is a typing issue, then the typeof command does not return the actual types of the variables; I have to believe it returns what Splunk expects they would be. For instance, in the search below, the last | where like(...) command does not remove any results.

| inputlookup kvstore_560k_lines_long max=15000
| stats count by secondUID
| where count=1 AND match(secondUID, "^\d+$")
| head 4000
| eval initialType=typeof(secondUID)
| eval secondUID=tostring(secondUID) ```this line causes the search to return different results```
| eval subsiquentType=typeof(secondUID)
| where like(initialType, subsiquentType)

Second (inconsistent): casting the secondUID with tonumber() causes more wrong results to be returned. So whatever is happening does not happen consistently for secondUIDs that are integers. I initially found this by rexing a dashboard token list of secondUIDs, so it isn't a matter of how the variables are initially loaded from the lookup table. This is also shown by needing the eval command (distilled logic from an if statement) to trigger the error. And as far as the inconsistency goes, there is no pattern to which numbers are treated as numbers vs. strings. It's not like all numbers starting with 0s have this happen, or whitespace being around some.

Third (unexpected lookup function): if you looked at any of the events, they would look correct. The values for secondUID would be correct, and they have results returned for uID. Unless this is just another quirk of Splunk where wrong lookups are possible without warning, I would expect | lookup to output a null() value when it is unable to use the key it is given.

Fourth (noticing the error): because Splunk doesn't give a warning and outputs a result, anybody using dedup or stats without a predetermined expected output would likely never know this exists. The only way I noticed it was because I knew there should be a 1-1 relation between my keys and values, and at the end of the search there wasn't.

Fifth (event count dependent): since a higher % of errors happens as the initial event count goes up, with no pattern in the secondUID values being passed in other than being "\d+", this makes me worried there is a memory allocation/referencing bug behind the scenes. That would probably also be fixed by tostring() if it were the case. The counterpoint is that it reliably returns the same wrong lookup values.

Maybe it really is just poor error handling in | lookup combined with typeof() being a lie. But I don't know how to come to a definitive conclusion without access to the Splunk code (C/C++?) behind the scenes, or analyzing Splunk while it is running to rule out an allocation/referencing issue in the absence of the | lookup code... Maybe you have other ideas on how to narrow it down. Or maybe I'm just crazy.
Hi @AL3Z, You should edit default.xml via the "Settings > User Interface > Navigation" menu while in your app. Use the collection type for dropdown menus, like below. Please replace dropdown_menu_name1 and dashboard_name1 with your own names; you can add more menus as you wish.

<nav search_view="search" color="#1D1D1B">
  <view name="search" default='true' />
  <view name="reports" />
  <view name="alerts" />
  <view name="dashboards" />
  <collection label="dropdown_menu_name1">
    <view name="dashboard_name1" />
  </collection>
  <collection label="dropdown_menu_name2">
    <view name="dashboard_name2" />
  </collection>
</nav>
Hi @scelikok, Below is my configuration of the filelog receiver inside the Helm chart:

{{- if and (eq (include "splunk-otel-collector.logsEnabled" .) "true") (eq .Values.logsEngine "otel") }}
{{- if .Values.logsCollection.containers.enabled }}
filelog:
  {{- if .Values.isWindows }}
  include: ["C:\\var\\log\\pods\\*\\*\\*.log"]
  {{- else }}
  include: ["/var/log/pods/*/*/*.log"]
  {{- end }}
  # Exclude logs. The file format is
  # /var/log/pods/<namespace_name>_<pod_name>_<pod_uid>/<container_name>/<restart_count>.log
  exclude:
    {{- if .Values.logsCollection.containers.excludeAgentLogs }}
    {{- if .Values.isWindows }}
    - "C:\\var\\log\\pods\\{{ .Release.Namespace }}_{{ include "splunk-otel-collector.fullname" . }}*_*\\otel-collector\\*.log"
    {{- else }}
    - /var/log/pods/{{ .Release.Namespace }}_{{ include "splunk-otel-collector.fullname" . }}*_*/otel-collector/*.log
    {{- end }}
    {{- end }}
    {{- range $_, $excludePath := .Values.logsCollection.containers.excludePaths }}
    - {{ $excludePath }}
    {{- end }}
  start_at: beginning
  include_file_path: true
  include_file_name: false
  poll_interval: 200ms
  # Disable force flush until this issue is fixed:
  # https://github.com/open-telemetry/opentelemetry-log-collection/issues/292
  retry_on_failure:
    enabled: true
{{- end }}

Also, when I checked the ConfigMap, start_at is mapped to beginning. Still, it is not showing the older logs in Splunk.
Hi, Could anyone please help me add a navigation menu to the dashboard, like the one in the picture shown below (e.g. Event Search). Thanks in advance.
Hi Everyone, I want to create a new use case to detect suspicious activity on insecure ports, from remote to local and from local to remote. I don't understand how to write the query with the source IP/destination IP as remote. Is there any way to define a "context" like Remote and Local? I want to define that for the L2R rule the destination IP should be remote, and for the R2L rule the source IP should be remote. I tried the reverse condition, but it didn't work properly. Example: for L2R I excluded all the local IP network segments (Source IP != 10.0.0.0/8), and for R2L vice versa (Destination IP != 10.0.0.0/8). Can anyone help me with this, please?
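One hedged way to define such a context in SPL is to classify each IP with cidrmatch() and derive a direction field. This is only a sketch: the field names src_ip/dest_ip, the single 10.0.0.0/8 local range, and the port list are assumptions to adapt to your environment.

```
| eval src_context=if(cidrmatch("10.0.0.0/8", src_ip), "local", "remote")
| eval dest_context=if(cidrmatch("10.0.0.0/8", dest_ip), "local", "remote")
| eval direction=case(src_context="local" AND dest_context="remote", "L2R",
                      src_context="remote" AND dest_context="local", "R2L")
| search direction=* dest_port IN (21, 23, 445, 3389)
```

If you have several internal ranges, add one cidrmatch() per range, or keep the local CIDRs in a lookup.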
Hi @gcusello, Option 1 is the smart solution without complicating it - it's working perfectly fine! Thanks for the help.
https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Inputsconf

# For Windows systems only.
# Does not use file handles
[MonitorNoHandle://<path>]
* This input intercepts file writes to the specific file.
* <path> must be a fully qualified path name to a specific file. Wildcards and directories are not accepted.
* This input type does not function on *nix machines.
* You can specify more than one stanza of this type.

disabled = <boolean>
* Whether or not the input is enabled.
* Default: 0 (enabled)

index = <string>
* Specifies the index where this input sends the data.
* This setting is optional.
* Default: the default index

I have no experience using this myself and only learned about it in the last week. Since you said the UF was on a Windows client, this could work for you. This would not work with a UF in *nix environments.
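Based on the spec above, a minimal stanza might look like the following. The file path and index name are placeholder examples, not values from this thread:

```
# inputs.conf on the Windows UF -- path and index are example values
[MonitorNoHandle://C:\Logs\app\locked_file.log]
index = app_logs
disabled = 0
```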
Super old topic, but shocking that it seems Splunk hasn't brought this functionality into the product. Would you be open to sharing the modifications you made to incident_review.js?   Thank you
Thanks @phanTom. If anyone else comes across this in a search: I created a decision block which checks for container tags:

If "tag1": go to the End block
If "tag2": continue to the next block

The next block applies the tag "tag1" to the container. The final block removes the tag "tag2" from the container. This design, for better or worse, allows me to run a playbook "on demand" via a Workbook or manual action on the case management side, while keeping automatic capabilities (apply label "tag2" to the container and run) if I decide to use it as, say, a child playbook.
Thanks! This looks to be returning the desired info and format. Though I noticed some Policies were missing counts for certain results: the number of different values shown for 'displayName' is less than is actually present in the event log. I think this may be an issue with Splunk itself and not the query, though. Would you happen to know if the number of values can have a max or limit in Splunk?
Frustration is the last emotional state I wanted to create here. Apologies! I still believe it was not me confusing people; I just wanted help with simply comparing two datasets and printing out only the hosts seen in one of the data sources. If that sounds confusing, then again, apologies. I have been a member for some time now and have always admired all the help given here. I have tested all the solutions provided so far and seen no results (it might be my fault). The only solution that gave me the results I want was the following, for all to use. I agree that set might not be as efficient as other commands (I don't have great knowledge of the set command as of now), but here is what worked for me.

| set diff [| tstats count where source_1 by host | table host] [| tstats count where source_2 by host | table host]

That SPL provides a list of all of the hosts not seen in source_2. At the end of the day, it is important for people to get some working examples; they can test them, and they either work or they don't. Silence is golden from time to time, and no one wants frustration waves. I did not mean any harm. I've been in the industry long enough to realize we are all different and have different emotions. Please, by all means, if you can create something that would prove me wrong, or anything better than the set command, share it for the community to use. Thank you!
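For comparison, here is a hedged alternative without set, keeping the poster's source_1/source_2 placeholders for the actual tstats filters. It flags which dataset each host appears in and keeps hosts present only under source_1:

```
| tstats count where source_1 by host
| eval in_1=1
| append [| tstats count where source_2 by host | eval in_2=1]
| stats max(in_1) as in_1, max(in_2) as in_2 by host
| where in_1=1 AND isnull(in_2)
| table host
```

This is a sketch, not a tested query, and it is subject to the usual append subsearch limits on very large host lists.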
Hi @aab1, The error message shows an installation error; Python is complaining about a missing module. Please check your installation document. At this stage, it does not seem related to a certificate or firewall issue.
I am using the Sideview App trying to monitor usage by users.  There is a Pain field in the User Activity report.  Does anyone know what this Pain field is trying to show?